Test Report: QEMU_macOS 19876

0db15b506654906b6081fade5258c34c52419f7c:2024-10-28:36841

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.87
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.99
48 TestCertOptions 10.2
49 TestCertExpiration 195.45
50 TestDockerFlags 10.09
51 TestForceSystemdFlag 10.12
52 TestForceSystemdEnv 10.71
84 TestFunctional/serial/ComponentHealth 0.92
97 TestFunctional/parallel/ServiceCmdConnect 33.32
162 TestMultiControlPlane/serial/StartCluster 725.38
163 TestMultiControlPlane/serial/DeployApp 90.29
164 TestMultiControlPlane/serial/PingHostFromPods 0.1
165 TestMultiControlPlane/serial/AddWorkerNode 0.09
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
169 TestMultiControlPlane/serial/StopSecondaryNode 0.12
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
171 TestMultiControlPlane/serial/RestartSecondaryNode 0.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 983.54
184 TestJSONOutput/start/Command 725.26
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.09
196 TestJSONOutput/unpause/Command 0.06
216 TestMountStart/serial/StartWithMountFirst 10.13
219 TestMultiNode/serial/FreshStart2Nodes 9.84
220 TestMultiNode/serial/DeployApp2Nodes 76.9
221 TestMultiNode/serial/PingHostFrom2Pods 0.1
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.07
224 TestMultiNode/serial/ProfileList 0.09
225 TestMultiNode/serial/CopyFile 0.07
226 TestMultiNode/serial/StopNode 0.16
227 TestMultiNode/serial/StartAfterStop 45.51
228 TestMultiNode/serial/RestartKeepsNodes 8.99
229 TestMultiNode/serial/DeleteNode 0.11
230 TestMultiNode/serial/StopMultiNode 3.4
231 TestMultiNode/serial/RestartMultiNode 5.27
232 TestMultiNode/serial/ValidateNameConflict 20.02
236 TestPreload 10.1
238 TestScheduledStopUnix 10.05
239 TestSkaffold 12.4
242 TestRunningBinaryUpgrade 593.93
244 TestKubernetesUpgrade 17.47
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.15
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.91
260 TestStoppedBinaryUpgrade/Upgrade 574.69
262 TestPause/serial/Start 9.83
272 TestNoKubernetes/serial/StartWithK8s 9.93
273 TestNoKubernetes/serial/StartWithStopK8s 5.3
274 TestNoKubernetes/serial/Start 5.32
278 TestNoKubernetes/serial/StartNoArgs 5.32
280 TestNetworkPlugins/group/auto/Start 10.01
281 TestNetworkPlugins/group/kindnet/Start 9.77
282 TestNetworkPlugins/group/calico/Start 9.91
283 TestNetworkPlugins/group/custom-flannel/Start 9.84
284 TestNetworkPlugins/group/false/Start 9.76
285 TestNetworkPlugins/group/enable-default-cni/Start 9.79
286 TestNetworkPlugins/group/flannel/Start 9.77
287 TestNetworkPlugins/group/bridge/Start 9.84
288 TestNetworkPlugins/group/kubenet/Start 10.07
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/no-preload/serial/FirstStart 9.77
303 TestStartStop/group/no-preload/serial/DeployApp 0.1
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.32
309 TestStartStop/group/embed-certs/serial/FirstStart 10.86
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
313 TestStartStop/group/no-preload/serial/Pause 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10
316 TestStartStop/group/embed-certs/serial/DeployApp 0.1
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
320 TestStartStop/group/embed-certs/serial/SecondStart 6.12
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.28
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
329 TestStartStop/group/embed-certs/serial/Pause 0.11
331 TestStartStop/group/newest-cni/serial/FirstStart 10.07
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (18.87s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.87325375s)

-- stdout --
	{"specversion":"1.0","id":"30fa249f-bd29-43cd-9d1d-7b57ca9e1ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-381000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b2ffa07-0988-4f17-94ea-19d536a62b72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"8ec462c1-72e6-4382-84fa-cf3b306efec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig"}}
	{"specversion":"1.0","id":"61c9110b-881d-427c-a71b-bf29bfd0badb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d07e084f-6ae1-4902-8472-b9850bd5ee9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0eaa15ba-c815-4e22-b9a5-f1eed5b732bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube"}}
	{"specversion":"1.0","id":"26abc2d3-08cc-432c-a742-77d8746b152d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"b74bbd25-4a89-4ad6-8710-62abe891f894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8af07a8e-0030-44cf-833e-258e4b727488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4b2d042f-79f6-4f14-8568-4b01be159432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"43d16306-3b2b-4703-99dc-32e7fcd85ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-381000\" primary control-plane node in \"download-only-381000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bb0e6c5-1c48-4861-be9a-c6b69e810aa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d58b452-5e9d-4443-97d0-1761925622d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320] Decompressors:map[bz2:0x14000125630 gz:0x14000125638 tar:0x140001255e0 tar.bz2:0x140001255f0 tar.gz:0x14000125600 tar.xz:0x14000125610 tar.zst:0x14000125620 tbz2:0x140001255f0 tgz:0x14000125600 txz:0x14000125610 tzst:0x14000125620 xz:0x14000125640 zip:0x14000125650 zst:0x14000125648] Getters:map[file:0x140018505a0 http:0x140006f20f0 https:0x140006f2140] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"07250ce3-bf04-4c31-b8e5-ab1282df18a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1028 03:39:51.282165    1599 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:39:51.282324    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:39:51.282328    1599 out.go:358] Setting ErrFile to fd 2...
	I1028 03:39:51.282330    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:39:51.282454    1599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	W1028 03:39:51.282563    1599 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19876-1087/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19876-1087/.minikube/config/config.json: no such file or directory
	I1028 03:39:51.283907    1599 out.go:352] Setting JSON to true
	I1028 03:39:51.302773    1599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":562,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:39:51.302849    1599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:39:51.308122    1599 out.go:97] [download-only-381000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:39:51.308279    1599 notify.go:220] Checking for updates...
	W1028 03:39:51.308335    1599 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 03:39:51.312058    1599 out.go:169] MINIKUBE_LOCATION=19876
	I1028 03:39:51.315164    1599 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:39:51.319074    1599 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:39:51.322120    1599 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:39:51.325159    1599 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	W1028 03:39:51.331074    1599 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 03:39:51.331291    1599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:39:51.335143    1599 out.go:97] Using the qemu2 driver based on user configuration
	I1028 03:39:51.335162    1599 start.go:297] selected driver: qemu2
	I1028 03:39:51.335182    1599 start.go:901] validating driver "qemu2" against <nil>
	I1028 03:39:51.335233    1599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 03:39:51.338995    1599 out.go:169] Automatically selected the socket_vmnet network
	I1028 03:39:51.344993    1599 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 03:39:51.345082    1599 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 03:39:51.345139    1599 cni.go:84] Creating CNI manager for ""
	I1028 03:39:51.345184    1599 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 03:39:51.345250    1599 start.go:340] cluster config:
	{Name:download-only-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-381000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:39:51.349841    1599 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:39:51.354150    1599 out.go:97] Downloading VM boot image ...
	I1028 03:39:51.354167    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1028 03:39:58.479399    1599 out.go:97] Starting "download-only-381000" primary control-plane node in "download-only-381000" cluster
	I1028 03:39:58.479430    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:39:58.538338    1599 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 03:39:58.538359    1599 cache.go:56] Caching tarball of preloaded images
	I1028 03:39:58.538580    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:39:58.542789    1599 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 03:39:58.542795    1599 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:39:58.622388    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 03:40:08.899879    1599 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:40:08.900059    1599 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:40:09.593343    1599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 03:40:09.593603    1599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/download-only-381000/config.json ...
	I1028 03:40:09.593620    1599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/download-only-381000/config.json: {Name:mk2a7c67cc474f3017fb2a3152723a48ce971025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:40:09.593906    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:40:09.594160    1599 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1028 03:40:10.075842    1599 out.go:193] 
	W1028 03:40:10.079959    1599 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320] Decompressors:map[bz2:0x14000125630 gz:0x14000125638 tar:0x140001255e0 tar.bz2:0x140001255f0 tar.gz:0x14000125600 tar.xz:0x14000125610 tar.zst:0x14000125620 tbz2:0x140001255f0 tgz:0x14000125600 txz:0x14000125610 tzst:0x14000125620 xz:0x14000125640 zip:0x14000125650 zst:0x14000125648] Getters:map[file:0x140018505a0 http:0x140006f20f0 https:0x140006f2140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1028 03:40:10.079987    1599 out_reason.go:110] 
	W1028 03:40:10.087842    1599 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 03:40:10.091822    1599 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-381000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.87s)
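
Note: the exit status 40 above bottoms out in the "bad response code: 404" on the kubectl checksum URL in the error payload. A quick way to confirm this is an upstream gap (no darwin/arm64 kubectl published for v1.20.0) rather than a problem on the CI host is to probe the same URLs the getter tried; this is a diagnostic sketch that assumes only that curl is available:

    # HEAD-request the exact URLs from the error message; a 404 on the
    # .sha256 checksum file is what aborted the download.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl | head -n 1

The TestDownloadOnly/v1.20.0/kubectl failure that follows is a direct consequence: the download never completed, so the cached binary it stats for was never written.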

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-770000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-770000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.829998667s)

-- stdout --
	* [offline-docker-770000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-770000" primary control-plane node in "offline-docker-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:38:36.778038    4602 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:38:36.778182    4602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:36.778185    4602 out.go:358] Setting ErrFile to fd 2...
	I1028 04:38:36.778188    4602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:36.778298    4602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:38:36.779657    4602 out.go:352] Setting JSON to false
	I1028 04:38:36.799578    4602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4087,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:38:36.799651    4602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:38:36.804855    4602 out.go:177] * [offline-docker-770000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:38:36.812625    4602 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:38:36.812625    4602 notify.go:220] Checking for updates...
	I1028 04:38:36.818579    4602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:38:36.821614    4602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:38:36.824601    4602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:38:36.827661    4602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:38:36.834594    4602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:38:36.837988    4602 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:36.838043    4602 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:38:36.841594    4602 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:38:36.848599    4602 start.go:297] selected driver: qemu2
	I1028 04:38:36.848620    4602 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:38:36.848634    4602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:38:36.850819    4602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:38:36.853590    4602 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:38:36.856638    4602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:38:36.856657    4602 cni.go:84] Creating CNI manager for ""
	I1028 04:38:36.856680    4602 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:38:36.856684    4602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:38:36.856723    4602 start.go:340] cluster config:
	{Name:offline-docker-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:38:36.861228    4602 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:36.869580    4602 out.go:177] * Starting "offline-docker-770000" primary control-plane node in "offline-docker-770000" cluster
	I1028 04:38:36.873594    4602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:38:36.873624    4602 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:38:36.873632    4602 cache.go:56] Caching tarball of preloaded images
	I1028 04:38:36.873740    4602 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:38:36.873746    4602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:38:36.873818    4602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/offline-docker-770000/config.json ...
	I1028 04:38:36.873830    4602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/offline-docker-770000/config.json: {Name:mk831a42a4ddef1aaf8feebbc1adf4aea31a8773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:38:36.874152    4602 start.go:360] acquireMachinesLock for offline-docker-770000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:36.874208    4602 start.go:364] duration metric: took 44µs to acquireMachinesLock for "offline-docker-770000"
	I1028 04:38:36.874221    4602 start.go:93] Provisioning new machine with config: &{Name:offline-docker-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:36.874252    4602 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:36.877579    4602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:36.892963    4602 start.go:159] libmachine.API.Create for "offline-docker-770000" (driver="qemu2")
	I1028 04:38:36.893012    4602 client.go:168] LocalClient.Create starting
	I1028 04:38:36.893091    4602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:36.893131    4602 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:36.893144    4602 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:36.893189    4602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:36.893217    4602 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:36.893233    4602 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:36.893678    4602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:37.050532    4602 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:37.109745    4602 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:37.109754    4602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:37.110016    4602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:37.120478    4602 main.go:141] libmachine: STDOUT: 
	I1028 04:38:37.120502    4602 main.go:141] libmachine: STDERR: 
	I1028 04:38:37.120572    4602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2 +20000M
	I1028 04:38:37.129888    4602 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:37.129909    4602 main.go:141] libmachine: STDERR: 
	I1028 04:38:37.129923    4602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:37.129931    4602 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:37.129942    4602 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:37.129968    4602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cb:93:ae:3c:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:37.132152    4602 main.go:141] libmachine: STDOUT: 
	I1028 04:38:37.132193    4602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:37.132228    4602 client.go:171] duration metric: took 239.204542ms to LocalClient.Create
	I1028 04:38:39.132647    4602 start.go:128] duration metric: took 2.258374083s to createHost
	I1028 04:38:39.132664    4602 start.go:83] releasing machines lock for "offline-docker-770000", held for 2.258438666s
	W1028 04:38:39.132671    4602 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:39.139968    4602 out.go:177] * Deleting "offline-docker-770000" in qemu2 ...
	W1028 04:38:39.155537    4602 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:39.155553    4602 start.go:729] Will try again in 5 seconds ...
	I1028 04:38:44.157765    4602 start.go:360] acquireMachinesLock for offline-docker-770000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:44.158161    4602 start.go:364] duration metric: took 299.708µs to acquireMachinesLock for "offline-docker-770000"
	I1028 04:38:44.158605    4602 start.go:93] Provisioning new machine with config: &{Name:offline-docker-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:44.158850    4602 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:44.168568    4602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:44.210526    4602 start.go:159] libmachine.API.Create for "offline-docker-770000" (driver="qemu2")
	I1028 04:38:44.210589    4602 client.go:168] LocalClient.Create starting
	I1028 04:38:44.210742    4602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:44.210825    4602 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:44.210840    4602 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:44.210938    4602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:44.210997    4602 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:44.211009    4602 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:44.211684    4602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:44.380576    4602 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:44.512381    4602 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:44.512388    4602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:44.512599    4602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:44.522607    4602 main.go:141] libmachine: STDOUT: 
	I1028 04:38:44.522637    4602 main.go:141] libmachine: STDERR: 
	I1028 04:38:44.522697    4602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2 +20000M
	I1028 04:38:44.531086    4602 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:44.531104    4602 main.go:141] libmachine: STDERR: 
	I1028 04:38:44.531118    4602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:44.531123    4602 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:44.531132    4602 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:44.531180    4602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b7:c8:5b:da:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/offline-docker-770000/disk.qcow2
	I1028 04:38:44.532957    4602 main.go:141] libmachine: STDOUT: 
	I1028 04:38:44.532983    4602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:44.532994    4602 client.go:171] duration metric: took 322.39875ms to LocalClient.Create
	I1028 04:38:46.535190    4602 start.go:128] duration metric: took 2.376274416s to createHost
	I1028 04:38:46.535267    4602 start.go:83] releasing machines lock for "offline-docker-770000", held for 2.377059916s
	W1028 04:38:46.535675    4602 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:46.547171    4602 out.go:201] 
	W1028 04:38:46.551321    4602 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:38:46.551476    4602 out.go:270] * 
	* 
	W1028 04:38:46.553536    4602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:38:46.562221    4602 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-770000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-28 04:38:46.575095 -0700 PDT m=+3535.357467917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-770000 -n offline-docker-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-770000 -n offline-docker-770000: exit status 7 (71.735709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-770000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-770000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-770000
--- FAIL: TestOffline (9.99s)
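
Note: both VM creation attempts above die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the CI host rather than at the test itself. The same refusal recurs across most of the qemu2 failures in this report. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation:

    # Is the daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If the daemon is down, restart it; minikube's docs run the Homebrew
    # service as root via the resolved brew path.
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet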

TestCertOptions (10.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-021000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-021000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.91683325s)

-- stdout --
	* [cert-options-021000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-021000" primary control-plane node in "cert-options-021000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-021000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-021000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-021000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.632292ms)

-- stdout --
	* The control-plane node cert-options-021000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-021000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-021000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-021000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-021000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-021000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.236166ms)

-- stdout --
	* The control-plane node cert-options-021000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-021000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-021000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-021000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-021000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-28 04:39:17.605585 -0700 PDT m=+3566.387788542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-021000 -n cert-options-021000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-021000 -n cert-options-021000: exit status 7 (35.353042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-021000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-021000
--- FAIL: TestCertOptions (10.20s)
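
Note: the SAN assertions at cert_options_test.go:69 never saw a certificate, since the VM never booted. Had the host started, the check amounts to parsing the apiserver certificate and listing its subject alternative names. A minimal Go sketch of that inspection, reading a hypothetical local copy of the cert rather than the test's actual openssl-over-ssh invocation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy; inside the VM the file would be
	// /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// TestCertOptions expects 127.0.0.1 and 192.168.15.15 among the IP
	// SANs, and localhost and www.google.com among the DNS SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}
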

TestCertExpiration (195.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.058638625s)

-- stdout --
	* [cert-expiration-899000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-899000" primary control-plane node in "cert-expiration-899000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-899000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-899000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.235565667s)

-- stdout --
	* [cert-expiration-899000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-899000" primary control-plane node in "cert-expiration-899000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-899000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-899000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-899000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-899000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-899000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-899000" primary control-plane node in "cert-expiration-899000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-899000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-899000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-899000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-28 04:42:17.718033 -0700 PDT m=+3746.505390001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-899000 -n cert-expiration-899000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-899000 -n cert-expiration-899000: exit status 7 (73.200041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-899000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-899000
--- FAIL: TestCertExpiration (195.45s)
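
Note: both start attempts in this test, like every failure in this group, die on the same root cause: connecting to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening there. A minimal Go probe for that condition, assuming the socket path shown in the failing qemu command lines is the one to test:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the failing socket_vmnet_client invocations above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the condition this run hit:
		// the socket path exists but no socket_vmnet daemon answers.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
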

TestDockerFlags (10.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-375000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-375000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.843728417s)

-- stdout --
	* [docker-flags-375000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-375000" primary control-plane node in "docker-flags-375000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-375000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:38:57.466978    4792 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:38:57.467120    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:57.467123    4792 out.go:358] Setting ErrFile to fd 2...
	I1028 04:38:57.467126    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:57.467260    4792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:38:57.468426    4792 out.go:352] Setting JSON to false
	I1028 04:38:57.486104    4792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4108,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:38:57.486174    4792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:38:57.492421    4792 out.go:177] * [docker-flags-375000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:38:57.500402    4792 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:38:57.500459    4792 notify.go:220] Checking for updates...
	I1028 04:38:57.507420    4792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:38:57.510375    4792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:38:57.514397    4792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:38:57.517384    4792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:38:57.520390    4792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:38:57.523702    4792 config.go:182] Loaded profile config "force-systemd-flag-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:57.523774    4792 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:57.523828    4792 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:38:57.528420    4792 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:38:57.540389    4792 start.go:297] selected driver: qemu2
	I1028 04:38:57.540395    4792 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:38:57.540402    4792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:38:57.542982    4792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:38:57.546378    4792 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:38:57.549506    4792 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1028 04:38:57.549536    4792 cni.go:84] Creating CNI manager for ""
	I1028 04:38:57.549562    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:38:57.549567    4792 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:38:57.549604    4792 start.go:340] cluster config:
	{Name:docker-flags-375000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-375000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:38:57.554574    4792 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:57.562352    4792 out.go:177] * Starting "docker-flags-375000" primary control-plane node in "docker-flags-375000" cluster
	I1028 04:38:57.566350    4792 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:38:57.566367    4792 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:38:57.566379    4792 cache.go:56] Caching tarball of preloaded images
	I1028 04:38:57.566476    4792 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:38:57.566482    4792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:38:57.566550    4792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/docker-flags-375000/config.json ...
	I1028 04:38:57.566561    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/docker-flags-375000/config.json: {Name:mkdaf6ae95a536ff3eb3a94410920f4d350caad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:38:57.566911    4792 start.go:360] acquireMachinesLock for docker-flags-375000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:57.566961    4792 start.go:364] duration metric: took 42.042µs to acquireMachinesLock for "docker-flags-375000"
	I1028 04:38:57.566972    4792 start.go:93] Provisioning new machine with config: &{Name:docker-flags-375000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-375000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:57.567007    4792 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:57.570377    4792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:57.587556    4792 start.go:159] libmachine.API.Create for "docker-flags-375000" (driver="qemu2")
	I1028 04:38:57.587595    4792 client.go:168] LocalClient.Create starting
	I1028 04:38:57.587672    4792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:57.587706    4792 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:57.587722    4792 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:57.587762    4792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:57.587790    4792 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:57.587796    4792 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:57.588238    4792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:57.743921    4792 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:57.780966    4792 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:57.780971    4792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:57.781160    4792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:38:57.791097    4792 main.go:141] libmachine: STDOUT: 
	I1028 04:38:57.791116    4792 main.go:141] libmachine: STDERR: 
	I1028 04:38:57.791199    4792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2 +20000M
	I1028 04:38:57.799627    4792 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:57.799643    4792 main.go:141] libmachine: STDERR: 
	I1028 04:38:57.799665    4792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:38:57.799669    4792 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:57.799680    4792 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:57.799707    4792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:95:6c:eb:e2:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:38:57.801519    4792 main.go:141] libmachine: STDOUT: 
	I1028 04:38:57.801540    4792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:57.801560    4792 client.go:171] duration metric: took 213.957917ms to LocalClient.Create
	I1028 04:38:59.803741    4792 start.go:128] duration metric: took 2.236707833s to createHost
	I1028 04:38:59.803774    4792 start.go:83] releasing machines lock for "docker-flags-375000", held for 2.236791291s
	W1028 04:38:59.803818    4792 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:59.828036    4792 out.go:177] * Deleting "docker-flags-375000" in qemu2 ...
	W1028 04:38:59.850858    4792 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:59.850883    4792 start.go:729] Will try again in 5 seconds ...
	I1028 04:39:04.853049    4792 start.go:360] acquireMachinesLock for docker-flags-375000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:39:04.853320    4792 start.go:364] duration metric: took 214.042µs to acquireMachinesLock for "docker-flags-375000"
	I1028 04:39:04.853384    4792 start.go:93] Provisioning new machine with config: &{Name:docker-flags-375000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-375000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:39:04.853580    4792 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:39:04.866515    4792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:39:04.904734    4792 start.go:159] libmachine.API.Create for "docker-flags-375000" (driver="qemu2")
	I1028 04:39:04.904788    4792 client.go:168] LocalClient.Create starting
	I1028 04:39:04.904912    4792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:39:04.905001    4792 main.go:141] libmachine: Decoding PEM data...
	I1028 04:39:04.905020    4792 main.go:141] libmachine: Parsing certificate...
	I1028 04:39:04.905090    4792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:39:04.905140    4792 main.go:141] libmachine: Decoding PEM data...
	I1028 04:39:04.905152    4792 main.go:141] libmachine: Parsing certificate...
	I1028 04:39:04.906050    4792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:39:05.078071    4792 main.go:141] libmachine: Creating SSH key...
	I1028 04:39:05.208119    4792 main.go:141] libmachine: Creating Disk image...
	I1028 04:39:05.208126    4792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:39:05.208328    4792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:39:05.218364    4792 main.go:141] libmachine: STDOUT: 
	I1028 04:39:05.218381    4792 main.go:141] libmachine: STDERR: 
	I1028 04:39:05.218432    4792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2 +20000M
	I1028 04:39:05.226933    4792 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:39:05.226947    4792 main.go:141] libmachine: STDERR: 
	I1028 04:39:05.226959    4792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:39:05.226965    4792 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:39:05.226976    4792 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:39:05.227008    4792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:48:e0:30:c2:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/docker-flags-375000/disk.qcow2
	I1028 04:39:05.228831    4792 main.go:141] libmachine: STDOUT: 
	I1028 04:39:05.228844    4792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:39:05.228855    4792 client.go:171] duration metric: took 324.060583ms to LocalClient.Create
	I1028 04:39:07.231035    4792 start.go:128] duration metric: took 2.377408833s to createHost
	I1028 04:39:07.231092    4792 start.go:83] releasing machines lock for "docker-flags-375000", held for 2.377742625s
	W1028 04:39:07.231505    4792 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-375000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-375000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:39:07.246148    4792 out.go:201] 
	W1028 04:39:07.250291    4792 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:39:07.250314    4792 out.go:270] * 
	* 
	W1028 04:39:07.253254    4792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:39:07.264105    4792 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-375000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-375000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-375000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.404333ms)

-- stdout --
	* The control-plane node docker-flags-375000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-375000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-375000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-375000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-375000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-375000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-375000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-375000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-375000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.883583ms)

-- stdout --
	* The control-plane node docker-flags-375000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-375000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-375000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-375000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-375000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-375000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-28 04:39:07.411188 -0700 PDT m=+3556.193447626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-375000 -n docker-flags-375000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-375000 -n docker-flags-375000: exit status 7 (33.59375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-375000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-375000
--- FAIL: TestDockerFlags (10.09s)
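
Note: the --docker-env and --docker-opt assertions in docker_test.go reduce to substring checks against `systemctl show docker` output, which in this run only ever contained the "host is not running" message. A minimal Go sketch of the environment check, with a hypothetical sample of what healthy systemd output would look like:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical sample of `systemctl show docker --property=Environment`
	// output; the real run got only the "host is not running" message,
	// so neither pair could ever match.
	output := "Environment=FOO=BAR BAZ=BAT"
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if strings.Contains(output, kv) {
			fmt.Printf("found %q in docker's Environment\n", kv)
		} else {
			fmt.Printf("missing %q in docker's Environment\n", kv)
		}
	}
}
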

TestForceSystemdFlag (10.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.912527167s)

-- stdout --
	* [force-systemd-flag-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-446000" primary control-plane node in "force-systemd-flag-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:38:52.336270    4769 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:38:52.336434    4769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:52.336438    4769 out.go:358] Setting ErrFile to fd 2...
	I1028 04:38:52.336440    4769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:52.336555    4769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:38:52.337735    4769 out.go:352] Setting JSON to false
	I1028 04:38:52.355227    4769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4103,"bootTime":1730111429,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:38:52.355302    4769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:38:52.360197    4769 out.go:177] * [force-systemd-flag-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:38:52.375652    4769 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:38:52.375708    4769 notify.go:220] Checking for updates...
	I1028 04:38:52.385628    4769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:38:52.389604    4769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:38:52.392634    4769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:38:52.395677    4769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:38:52.398591    4769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:38:52.401940    4769 config.go:182] Loaded profile config "force-systemd-env-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:52.402025    4769 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:52.402074    4769 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:38:52.406615    4769 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:38:52.413597    4769 start.go:297] selected driver: qemu2
	I1028 04:38:52.413603    4769 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:38:52.413609    4769 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:38:52.416343    4769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:38:52.419621    4769 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:38:52.421010    4769 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:38:52.421032    4769 cni.go:84] Creating CNI manager for ""
	I1028 04:38:52.421066    4769 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:38:52.421073    4769 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:38:52.421108    4769 start.go:340] cluster config:
	{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:38:52.425992    4769 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:52.433630    4769 out.go:177] * Starting "force-systemd-flag-446000" primary control-plane node in "force-systemd-flag-446000" cluster
	I1028 04:38:52.437623    4769 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:38:52.437640    4769 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:38:52.437649    4769 cache.go:56] Caching tarball of preloaded images
	I1028 04:38:52.437743    4769 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:38:52.437750    4769 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:38:52.437823    4769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/force-systemd-flag-446000/config.json ...
	I1028 04:38:52.437836    4769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/force-systemd-flag-446000/config.json: {Name:mk55632a59e5d2810246f2b2f9caa54efecf701f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:38:52.438305    4769 start.go:360] acquireMachinesLock for force-systemd-flag-446000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:52.438363    4769 start.go:364] duration metric: took 49µs to acquireMachinesLock for "force-systemd-flag-446000"
	I1028 04:38:52.438382    4769 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:52.438408    4769 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:52.442678    4769 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:52.460833    4769 start.go:159] libmachine.API.Create for "force-systemd-flag-446000" (driver="qemu2")
	I1028 04:38:52.460863    4769 client.go:168] LocalClient.Create starting
	I1028 04:38:52.460936    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:52.460977    4769 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:52.460989    4769 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:52.461030    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:52.461061    4769 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:52.461074    4769 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:52.461430    4769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:52.617553    4769 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:52.714966    4769 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:52.714972    4769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:52.715162    4769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:38:52.725209    4769 main.go:141] libmachine: STDOUT: 
	I1028 04:38:52.725228    4769 main.go:141] libmachine: STDERR: 
	I1028 04:38:52.725288    4769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2 +20000M
	I1028 04:38:52.733780    4769 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:52.733796    4769 main.go:141] libmachine: STDERR: 
	I1028 04:38:52.733817    4769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:38:52.733823    4769 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:52.733834    4769 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:52.733865    4769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e0:55:83:bf:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:38:52.735686    4769 main.go:141] libmachine: STDOUT: 
	I1028 04:38:52.735701    4769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:52.735721    4769 client.go:171] duration metric: took 274.850917ms to LocalClient.Create
	I1028 04:38:54.737901    4769 start.go:128] duration metric: took 2.299458875s to createHost
	I1028 04:38:54.738010    4769 start.go:83] releasing machines lock for "force-systemd-flag-446000", held for 2.299622042s
	W1028 04:38:54.738074    4769 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:54.764187    4769 out.go:177] * Deleting "force-systemd-flag-446000" in qemu2 ...
	W1028 04:38:54.785368    4769 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:54.785382    4769 start.go:729] Will try again in 5 seconds ...
	I1028 04:38:59.787565    4769 start.go:360] acquireMachinesLock for force-systemd-flag-446000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:59.803869    4769 start.go:364] duration metric: took 16.216792ms to acquireMachinesLock for "force-systemd-flag-446000"
	I1028 04:38:59.804047    4769 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:59.804320    4769 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:59.817913    4769 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:59.865328    4769 start.go:159] libmachine.API.Create for "force-systemd-flag-446000" (driver="qemu2")
	I1028 04:38:59.865376    4769 client.go:168] LocalClient.Create starting
	I1028 04:38:59.865545    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:59.865642    4769 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:59.865658    4769 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:59.865731    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:59.865796    4769 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:59.865816    4769 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:59.866545    4769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:39:00.040453    4769 main.go:141] libmachine: Creating SSH key...
	I1028 04:39:00.138826    4769 main.go:141] libmachine: Creating Disk image...
	I1028 04:39:00.138836    4769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:39:00.139026    4769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:39:00.149126    4769 main.go:141] libmachine: STDOUT: 
	I1028 04:39:00.149144    4769 main.go:141] libmachine: STDERR: 
	I1028 04:39:00.149214    4769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2 +20000M
	I1028 04:39:00.157584    4769 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:39:00.157600    4769 main.go:141] libmachine: STDERR: 
	I1028 04:39:00.157613    4769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:39:00.157618    4769 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:39:00.157629    4769 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:39:00.157663    4769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ac:1c:ce:21:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I1028 04:39:00.159448    4769 main.go:141] libmachine: STDOUT: 
	I1028 04:39:00.159469    4769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:39:00.159482    4769 client.go:171] duration metric: took 294.098666ms to LocalClient.Create
	I1028 04:39:02.161710    4769 start.go:128] duration metric: took 2.357336458s to createHost
	I1028 04:39:02.161804    4769 start.go:83] releasing machines lock for "force-systemd-flag-446000", held for 2.357893833s
	W1028 04:39:02.162252    4769 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:39:02.176969    4769 out.go:201] 
	W1028 04:39:02.188194    4769 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:39:02.188227    4769 out.go:270] * 
	* 
	W1028 04:39:02.190171    4769 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:39:02.202914    4769 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (86.795917ms)

-- stdout --
	* The control-plane node force-systemd-flag-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-446000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-28 04:39:02.306948 -0700 PDT m=+3551.089234917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-446000 -n force-systemd-flag-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-446000 -n force-systemd-flag-446000: exit status 7 (37.59825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-446000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-446000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-446000
--- FAIL: TestForceSystemdFlag (10.12s)
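Both provisioning attempts in this failure die at the same step: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube exits with GUEST_PROVISION. That points at the socket_vmnet daemon on the build host rather than at minikube. A minimal triage sequence, assuming socket_vmnet was installed via Homebrew as the log's paths suggest (the service name and commands below reflect the standard setup, not something verified in this run):

$ ls -l /var/run/socket_vmnet                                              # the socket must exist and be connectable
$ sudo brew services restart socket_vmnet                                  # restart the daemon that owns the socket
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok  # prints "ok" once the daemon accepts connections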

                                                
                                    
TestForceSystemdEnv (10.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-759000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1028 04:38:47.225926    1598 install.go:79] stdout: 
W1028 04:38:47.226053    1598 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit 

I1028 04:38:47.226071    1598 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit]
I1028 04:38:47.238141    1598 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit]
I1028 04:38:47.249438    1598 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit]
I1028 04:38:47.261468    1598 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit]
I1028 04:38:47.283104    1598 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 04:38:47.283247    1598 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1028 04:38:49.084687    1598 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1028 04:38:49.084710    1598 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1028 04:38:49.084753    1598 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1028 04:38:49.084790    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit
I1028 04:38:49.475974    1598 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0] Decompressors:map[bz2:0x14000543430 gz:0x14000543438 tar:0x140005433e0 tar.bz2:0x140005433f0 tar.gz:0x14000543400 tar.xz:0x14000543410 tar.zst:0x14000543420 tbz2:0x140005433f0 tgz:0x14000543400 txz:0x14000543410 tzst:0x14000543420 xz:0x14000543440 zip:0x14000543450 zst:0x14000543448] Getters:map[file:0x14001520300 http:0x140005407d0 https:0x14000540820] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 04:38:49.476096    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit
I1028 04:38:52.252114    1598 install.go:79] stdout: 
W1028 04:38:52.252350    1598 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit 

I1028 04:38:52.252375    1598 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit]
I1028 04:38:52.269118    1598 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit]
I1028 04:38:52.282229    1598 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit]
I1028 04:38:52.292902    1598 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/002/docker-machine-driver-hyperkit]
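The pid-1598 lines interleaved above come from TestHyperKitDriverInstallOrUpdate running in parallel, not from the force-systemd tests (pids 4737/4769). They record an expected fallback: the arch-specific docker-machine-driver-hyperkit-arm64 download fails because its checksum file returns 404, so minikube retries the unsuffixed common artifact, and the later chown/chmod against the 002 binary shows that retry completed. The fallback can be confirmed by hand with the URLs exactly as they appear in the log:

$ curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256   # 404: no arm64 asset for v1.3.0
$ curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256         # the common checksum the retry falls back to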
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-759000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.502122125s)

-- stdout --
	* [force-systemd-env-759000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-759000" primary control-plane node in "force-systemd-env-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1028 04:38:46.763357    4737 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:38:46.763511    4737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:46.763515    4737 out.go:358] Setting ErrFile to fd 2...
	I1028 04:38:46.763517    4737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:46.763656    4737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:38:46.764791    4737 out.go:352] Setting JSON to false
	I1028 04:38:46.783142    4737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4097,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:38:46.783216    4737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:38:46.788731    4737 out.go:177] * [force-systemd-env-759000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:38:46.797000    4737 notify.go:220] Checking for updates...
	I1028 04:38:46.800871    4737 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:38:46.808870    4737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:38:46.816892    4737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:38:46.824834    4737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:38:46.832711    4737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:38:46.840848    4737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1028 04:38:46.844264    4737 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:46.844313    4737 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:38:46.848842    4737 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:38:46.855888    4737 start.go:297] selected driver: qemu2
	I1028 04:38:46.855894    4737 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:38:46.855902    4737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:38:46.858510    4737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:38:46.861878    4737 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:38:46.865970    4737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:38:46.865987    4737 cni.go:84] Creating CNI manager for ""
	I1028 04:38:46.866009    4737 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:38:46.866020    4737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:38:46.866052    4737 start.go:340] cluster config:
	{Name:force-systemd-env-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:38:46.870584    4737 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:46.874844    4737 out.go:177] * Starting "force-systemd-env-759000" primary control-plane node in "force-systemd-env-759000" cluster
	I1028 04:38:46.882847    4737 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:38:46.882863    4737 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:38:46.882871    4737 cache.go:56] Caching tarball of preloaded images
	I1028 04:38:46.882960    4737 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:38:46.882966    4737 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:38:46.883026    4737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/force-systemd-env-759000/config.json ...
	I1028 04:38:46.883038    4737 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/force-systemd-env-759000/config.json: {Name:mk4625b7a42c0043a8b971e77c40352955505529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:38:46.883275    4737 start.go:360] acquireMachinesLock for force-systemd-env-759000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:46.883323    4737 start.go:364] duration metric: took 42.125µs to acquireMachinesLock for "force-systemd-env-759000"
	I1028 04:38:46.883336    4737 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:46.883360    4737 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:46.889846    4737 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:46.905998    4737 start.go:159] libmachine.API.Create for "force-systemd-env-759000" (driver="qemu2")
	I1028 04:38:46.906025    4737 client.go:168] LocalClient.Create starting
	I1028 04:38:46.906097    4737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:46.906132    4737 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:46.906143    4737 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:46.906180    4737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:46.906210    4737 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:46.906228    4737 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:46.906574    4737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:47.065183    4737 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:47.265657    4737 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:47.265668    4737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:47.265870    4737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:47.275776    4737 main.go:141] libmachine: STDOUT: 
	I1028 04:38:47.275811    4737 main.go:141] libmachine: STDERR: 
	I1028 04:38:47.275895    4737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2 +20000M
	I1028 04:38:47.285152    4737 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:47.285171    4737 main.go:141] libmachine: STDERR: 
	I1028 04:38:47.285190    4737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:47.285197    4737 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:47.285212    4737 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:47.285249    4737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ca:b1:f1:5a:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:47.287093    4737 main.go:141] libmachine: STDOUT: 
	I1028 04:38:47.287105    4737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:47.287128    4737 client.go:171] duration metric: took 381.09575ms to LocalClient.Create
	I1028 04:38:49.289376    4737 start.go:128] duration metric: took 2.405965417s to createHost
	I1028 04:38:49.289455    4737 start.go:83] releasing machines lock for "force-systemd-env-759000", held for 2.406105833s
	W1028 04:38:49.289656    4737 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:49.308901    4737 out.go:177] * Deleting "force-systemd-env-759000" in qemu2 ...
	W1028 04:38:49.334911    4737 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:49.334939    4737 start.go:729] Will try again in 5 seconds ...
	I1028 04:38:54.337279    4737 start.go:360] acquireMachinesLock for force-systemd-env-759000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:54.738170    4737 start.go:364] duration metric: took 400.773791ms to acquireMachinesLock for "force-systemd-env-759000"
	I1028 04:38:54.738305    4737 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:54.738576    4737 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:54.747319    4737 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 04:38:54.794048    4737 start.go:159] libmachine.API.Create for "force-systemd-env-759000" (driver="qemu2")
	I1028 04:38:54.794091    4737 client.go:168] LocalClient.Create starting
	I1028 04:38:54.794239    4737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:54.794305    4737 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:54.794322    4737 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:54.794390    4737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:54.794447    4737 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:54.794462    4737 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:54.795095    4737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:54.970097    4737 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:55.158802    4737 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:55.158812    4737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:55.159006    4737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:55.169267    4737 main.go:141] libmachine: STDOUT: 
	I1028 04:38:55.169299    4737 main.go:141] libmachine: STDERR: 
	I1028 04:38:55.169367    4737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2 +20000M
	I1028 04:38:55.177881    4737 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:55.177894    4737 main.go:141] libmachine: STDERR: 
	I1028 04:38:55.177910    4737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:55.177920    4737 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:55.177928    4737 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:55.177977    4737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:6f:35:a4:4a:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/force-systemd-env-759000/disk.qcow2
	I1028 04:38:55.179812    4737 main.go:141] libmachine: STDOUT: 
	I1028 04:38:55.179825    4737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:55.179845    4737 client.go:171] duration metric: took 385.746417ms to LocalClient.Create
	I1028 04:38:57.182254    4737 start.go:128] duration metric: took 2.443572667s to createHost
	I1028 04:38:57.182388    4737 start.go:83] releasing machines lock for "force-systemd-env-759000", held for 2.444173542s
	W1028 04:38:57.182861    4737 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:57.197606    4737 out.go:201] 
	W1028 04:38:57.205479    4737 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:38:57.205521    4737 out.go:270] * 
	* 
	W1028 04:38:57.208442    4737 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:38:57.216249    4737 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-759000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-759000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-759000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.829542ms)

-- stdout --
	* The control-plane node force-systemd-env-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-759000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-759000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-28 04:38:57.318022 -0700 PDT m=+3546.100336876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-759000 -n force-systemd-env-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-759000 -n force-systemd-env-759000: exit status 7 (37.145291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-759000
--- FAIL: TestForceSystemdEnv (10.71s)
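Same root cause as TestForceSystemdFlag: the VM never boots, so the cgroup-driver assertion is never reached. For reference, the check docker_test.go:110 performs is a single probe inside the guest; on a cluster started with MINIKUBE_FORCE_SYSTEMD=true (or --force-systemd) it is expected to print systemd rather than Docker's default of cgroupfs:

$ out/minikube-darwin-arm64 -p force-systemd-env-759000 ssh "docker info --format {{.CgroupDriver}}"
systemd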

                                                
                                    
TestFunctional/serial/ComponentHealth (0.92s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-940000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:833: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.105.4 PodIP:192.168.105.4 StartTime:2024-10-28 03:49:56 -0700 PDT ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0x14000c42330 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x14001dc2f50} Ready:false RestartCount:2 Image:registry.k8s.io/kube-controller-manager:v1.31.2 ImageID:docker-pullable://registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752 ContainerID:docker://e1379d9874bdb4eefa4995b04cefc6ae21864f2f7271cf923d9a10157d0b27c8}]}
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
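Unlike the qemu2 failures above, the cluster here is actually running; the test fails only because kube-controller-manager reports Ready=False with RestartCount:2, meaning the container was mid-restart when functional_test.go sampled pod status. The same conditions the test parses from the JSON can be inspected directly (context name taken from the log); the READY and RESTARTS columns make the flapping container obvious:

$ kubectl --context functional-940000 -n kube-system get po -l tier=control-plane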
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-940000 -n functional-940000
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 logs -n 25
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:46 PDT | 28 Oct 24 03:46 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:46 PDT | 28 Oct 24 03:46 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:46 PDT | 28 Oct 24 03:46 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| stop    | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:46 PDT | 28 Oct 24 03:46 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:46 PDT | 28 Oct 24 03:47 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-196000 --log_dir                                                  | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:47 PDT | 28 Oct 24 03:47 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| delete  | -p nospam-196000                                                         | nospam-196000     | jenkins | v1.34.0 | 28 Oct 24 03:47 PDT | 28 Oct 24 03:47 PDT |
	| start   | -p functional-940000                                                     | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:47 PDT | 28 Oct 24 03:48 PDT |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                   |         |         |                     |                     |
	| start   | -p functional-940000                                                     | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:48 PDT | 28 Oct 24 03:49 PDT |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-940000 cache add                                              | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-940000 cache add                                              | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-940000 cache add                                              | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-940000 cache add                                              | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | minikube-local-cache-test:functional-940000                              |                   |         |         |                     |                     |
	| cache   | functional-940000 cache delete                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | minikube-local-cache-test:functional-940000                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	| ssh     | functional-940000 ssh sudo                                               | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-940000                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | ssh sudo docker rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh                                                    | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-940000 cache reload                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	| ssh     | functional-940000 ssh                                                    | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-940000 kubectl --                                             | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | --context functional-940000                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-940000                                                     | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:49 PDT | 28 Oct 24 03:49 PDT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 03:49:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
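	
	The "Log line format" header above is the standard glog/klog prefix: the first byte is the severity (I, W, E, or F). When triaging a long start log like this one, it can help to pull out just the warning/error lines; a minimal sketch, assuming the log was saved to a hypothetical file start.log:
	
	    # keep only W/E/F-severity klog lines
	    grep -E '^[[:space:]]*[WEF][0-9]{4} ' start.log
	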
	I1028 03:49:21.169123    2088 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:49:21.169275    2088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:49:21.169277    2088 out.go:358] Setting ErrFile to fd 2...
	I1028 03:49:21.169279    2088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:49:21.169406    2088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:49:21.170543    2088 out.go:352] Setting JSON to false
	I1028 03:49:21.188340    2088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1132,"bootTime":1730111429,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:49:21.188413    2088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:49:21.193299    2088 out.go:177] * [functional-940000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:49:21.201291    2088 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 03:49:21.201352    2088 notify.go:220] Checking for updates...
	I1028 03:49:21.208238    2088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:49:21.211305    2088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:49:21.214277    2088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:49:21.217260    2088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 03:49:21.220263    2088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 03:49:21.223599    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:21.223652    2088 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:49:21.227221    2088 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 03:49:21.234302    2088 start.go:297] selected driver: qemu2
	I1028 03:49:21.234307    2088 start.go:901] validating driver "qemu2" against &{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:21.234365    2088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 03:49:21.236852    2088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 03:49:21.236874    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:21.236898    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:21.236939    2088 start.go:340] cluster config:
	{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:21.241346    2088 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:49:21.249244    2088 out.go:177] * Starting "functional-940000" primary control-plane node in "functional-940000" cluster
	I1028 03:49:21.253330    2088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:49:21.253345    2088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 03:49:21.253351    2088 cache.go:56] Caching tarball of preloaded images
	I1028 03:49:21.253422    2088 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 03:49:21.253428    2088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 03:49:21.253491    2088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/config.json ...
	I1028 03:49:21.253897    2088 start.go:360] acquireMachinesLock for functional-940000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 03:49:21.253942    2088 start.go:364] duration metric: took 40.583µs to acquireMachinesLock for "functional-940000"
	I1028 03:49:21.253949    2088 start.go:96] Skipping create...Using existing machine configuration
	I1028 03:49:21.253951    2088 fix.go:54] fixHost starting: 
	I1028 03:49:21.254548    2088 fix.go:112] recreateIfNeeded on functional-940000: state=Running err=<nil>
	W1028 03:49:21.254554    2088 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 03:49:21.263286    2088 out.go:177] * Updating the running qemu2 "functional-940000" VM ...
	I1028 03:49:21.267282    2088 machine.go:93] provisionDockerMachine start ...
	I1028 03:49:21.267334    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.267486    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.267489    2088 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 03:49:21.309800    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-940000
	
	I1028 03:49:21.309809    2088 buildroot.go:166] provisioning hostname "functional-940000"
	I1028 03:49:21.309851    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.309964    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.309968    2088 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-940000 && echo "functional-940000" | sudo tee /etc/hostname
	I1028 03:49:21.352660    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-940000
	
	I1028 03:49:21.352708    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.352813    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.352819    2088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-940000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-940000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-940000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 03:49:21.392478    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 03:49:21.392485    2088 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19876-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19876-1087/.minikube}
	I1028 03:49:21.392494    2088 buildroot.go:174] setting up certificates
	I1028 03:49:21.392498    2088 provision.go:84] configureAuth start
	I1028 03:49:21.392504    2088 provision.go:143] copyHostCerts
	I1028 03:49:21.392573    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem, removing ...
	I1028 03:49:21.392577    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem
	I1028 03:49:21.392817    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem (1679 bytes)
	I1028 03:49:21.393030    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem, removing ...
	I1028 03:49:21.393033    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem
	I1028 03:49:21.393088    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem (1078 bytes)
	I1028 03:49:21.393208    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem, removing ...
	I1028 03:49:21.393210    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem
	I1028 03:49:21.393258    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem (1123 bytes)
	I1028 03:49:21.393350    2088 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem org=jenkins.functional-940000 san=[127.0.0.1 192.168.105.4 functional-940000 localhost minikube]
	I1028 03:49:21.514706    2088 provision.go:177] copyRemoteCerts
	I1028 03:49:21.514754    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 03:49:21.514760    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.537083    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 03:49:21.545399    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 03:49:21.553874    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 03:49:21.562772    2088 provision.go:87] duration metric: took 170.266542ms to configureAuth
	I1028 03:49:21.562778    2088 buildroot.go:189] setting minikube options for container-runtime
	I1028 03:49:21.562908    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:21.562953    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.563043    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.563046    2088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 03:49:21.602873    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 03:49:21.602878    2088 buildroot.go:70] root file system type: tmpfs
	I1028 03:49:21.602923    2088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 03:49:21.602987    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.603083    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.603114    2088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 03:49:21.645992    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
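	The unit printed back above uses the systemd override idiom its own comments describe: an empty ExecStart= first clears the command inherited from the base unit, and the second ExecStart= supplies the replacement. The same idiom in a minimal standalone drop-in (a sketch; the path and dockerd flags here are illustrative, not minikube's):
	
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	      | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	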
	I1028 03:49:21.646050    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.646170    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.646176    2088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 03:49:21.687914    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 03:49:21.687920    2088 machine.go:96] duration metric: took 420.632709ms to provisionDockerMachine
	I1028 03:49:21.687924    2088 start.go:293] postStartSetup for "functional-940000" (driver="qemu2")
	I1028 03:49:21.687930    2088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 03:49:21.687977    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 03:49:21.687984    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.709763    2088 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 03:49:21.711199    2088 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 03:49:21.711204    2088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/addons for local assets ...
	I1028 03:49:21.711291    2088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/files for local assets ...
	I1028 03:49:21.711427    2088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I1028 03:49:21.711570    2088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/test/nested/copy/1598/hosts -> hosts in /etc/test/nested/copy/1598
	I1028 03:49:21.711620    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1598
	I1028 03:49:21.715579    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I1028 03:49:21.723711    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/test/nested/copy/1598/hosts --> /etc/test/nested/copy/1598/hosts (40 bytes)
	I1028 03:49:21.732406    2088 start.go:296] duration metric: took 44.477ms for postStartSetup
	I1028 03:49:21.732418    2088 fix.go:56] duration metric: took 478.465542ms for fixHost
	I1028 03:49:21.732465    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.732569    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.732572    2088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 03:49:21.770236    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730112561.819452308
	
	I1028 03:49:21.770241    2088 fix.go:216] guest clock: 1730112561.819452308
	I1028 03:49:21.770244    2088 fix.go:229] Guest: 2024-10-28 03:49:21.819452308 -0700 PDT Remote: 2024-10-28 03:49:21.732419 -0700 PDT m=+0.584825959 (delta=87.033308ms)
	I1028 03:49:21.770253    2088 fix.go:200] guest clock delta is within tolerance: 87.033308ms
	I1028 03:49:21.770255    2088 start.go:83] releasing machines lock for "functional-940000", held for 516.308792ms
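	The fixHost step above reads the guest clock with date +%s.%N and compares it against the host wall clock; here the 87ms delta is inside tolerance, so no resync is needed. A rough standalone skew check (a sketch; assumes GNU date on both ends and a hypothetical SSH alias "guest"):
	
	    host=$(date +%s.%N)
	    guest=$(ssh guest 'date +%s.%N')
	    echo "guest-host delta: $(echo "$guest - $host" | bc)s"
	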
	I1028 03:49:21.770596    2088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 03:49:21.770596    2088 ssh_runner.go:195] Run: cat /version.json
	I1028 03:49:21.770603    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.770610    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.792035    2088 ssh_runner.go:195] Run: systemctl --version
	I1028 03:49:21.837060    2088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 03:49:21.838910    2088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 03:49:21.838937    2088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 03:49:21.842274    2088 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 03:49:21.842278    2088 start.go:495] detecting cgroup driver to use...
	I1028 03:49:21.842351    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 03:49:21.848854    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 03:49:21.852760    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 03:49:21.856607    2088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 03:49:21.856629    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 03:49:21.860223    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 03:49:21.864269    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 03:49:21.868372    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 03:49:21.872274    2088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 03:49:21.876543    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 03:49:21.880265    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 03:49:21.884380    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 03:49:21.888507    2088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 03:49:21.892403    2088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
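	The two runs above cover the kernel networking prerequisites Kubernetes expects: bridged traffic visible to iptables and IPv4 forwarding enabled. A quick way to confirm both on a node (a sketch; the bridge key requires the br_netfilter module to be loaded):
	
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	    # expected on a working node:
	    #   net.bridge.bridge-nf-call-iptables = 1
	    #   net.ipv4.ip_forward = 1
	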
	I1028 03:49:21.896421    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:21.990133    2088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 03:49:22.001103    2088 start.go:495] detecting cgroup driver to use...
	I1028 03:49:22.001170    2088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 03:49:22.008119    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 03:49:22.013701    2088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 03:49:22.020392    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 03:49:22.026328    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 03:49:22.031732    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 03:49:22.038748    2088 ssh_runner.go:195] Run: which cri-dockerd
	I1028 03:49:22.040271    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 03:49:22.043765    2088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 03:49:22.050435    2088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 03:49:22.148379    2088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 03:49:22.243898    2088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 03:49:22.243953    2088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 03:49:22.250441    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:22.353701    2088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 03:49:34.690473    2088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.336715542s)
	I1028 03:49:34.690551    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 03:49:34.697162    2088 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 03:49:34.704938    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 03:49:34.710806    2088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 03:49:34.798582    2088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 03:49:34.891040    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:34.975539    2088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 03:49:34.983045    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 03:49:34.988898    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:35.065727    2088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
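	The unmask/enable/daemon-reload/restart sequence above brings cri-dockerd up socket-activated, which is what the "Will wait 60s" checks just below poll for. A quick spot check on the guest (a sketch):
	
	    systemctl is-active cri-docker.socket cri-docker.service
	    test -S /var/run/cri-dockerd.sock && echo "CRI socket present"
	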
	I1028 03:49:35.096755    2088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 03:49:35.096842    2088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 03:49:35.099201    2088 start.go:563] Will wait 60s for crictl version
	I1028 03:49:35.099250    2088 ssh_runner.go:195] Run: which crictl
	I1028 03:49:35.100714    2088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 03:49:35.112329    2088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 03:49:35.112420    2088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 03:49:35.120188    2088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 03:49:35.137784    2088 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 03:49:35.137942    2088 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1028 03:49:35.143771    2088 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1028 03:49:35.147763    2088 kubeadm.go:883] updating cluster {Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 03:49:35.147830    2088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:49:35.147897    2088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 03:49:35.154155    2088 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-940000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1028 03:49:35.154160    2088 docker.go:619] Images already preloaded, skipping extraction
	I1028 03:49:35.154215    2088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 03:49:35.159438    2088 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-940000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1028 03:49:35.159443    2088 cache_images.go:84] Images are preloaded, skipping loading
	I1028 03:49:35.159447    2088 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.2 docker true true} ...
	I1028 03:49:35.159499    2088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-940000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 03:49:35.159550    2088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 03:49:35.177915    2088 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1028 03:49:35.177924    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:35.177932    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:35.177938    2088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 03:49:35.177947    2088 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-940000 NodeName:functional-940000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 03:49:35.178001    2088 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-940000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.105.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
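	The rendered kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2148 bytes, per the scp line below). Recent kubeadm releases (v1.26+) can sanity-check such a file before it is applied; a sketch, assuming kubeadm is on the guest's PATH:
	
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	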
	I1028 03:49:35.178070    2088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 03:49:35.181696    2088 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 03:49:35.181737    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 03:49:35.184948    2088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 03:49:35.190956    2088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 03:49:35.196812    2088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1028 03:49:35.202838    2088 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1028 03:49:35.204142    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:35.293823    2088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 03:49:35.300017    2088 certs.go:68] Setting up /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000 for IP: 192.168.105.4
	I1028 03:49:35.300023    2088 certs.go:194] generating shared ca certs ...
	I1028 03:49:35.300030    2088 certs.go:226] acquiring lock for ca certs: {Name:mk8f0a455373409f6ac5dde02ca67c613058d85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:35.300205    2088 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key
	I1028 03:49:35.300266    2088 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key
	I1028 03:49:35.300275    2088 certs.go:256] generating profile certs ...
	I1028 03:49:35.300359    2088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.key
	I1028 03:49:35.300427    2088 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.key.443fd431
	I1028 03:49:35.300489    2088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.key
	I1028 03:49:35.300662    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem (1338 bytes)
	W1028 03:49:35.300699    2088 certs.go:480] ignoring /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I1028 03:49:35.300703    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 03:49:35.300735    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem (1078 bytes)
	I1028 03:49:35.300765    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem (1123 bytes)
	I1028 03:49:35.300797    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem (1679 bytes)
	I1028 03:49:35.300859    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I1028 03:49:35.301204    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 03:49:35.309739    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 03:49:35.318070    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 03:49:35.326598    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 03:49:35.335291    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 03:49:35.343909    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 03:49:35.352563    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 03:49:35.360669    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 03:49:35.368786    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I1028 03:49:35.377043    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 03:49:35.385343    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I1028 03:49:35.393942    2088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 03:49:35.399743    2088 ssh_runner.go:195] Run: openssl version
	I1028 03:49:35.402009    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I1028 03:49:35.405868    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.407541    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:47 /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.407567    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.409820    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 03:49:35.413289    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 03:49:35.416907    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.418454    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:40 /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.418473    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.420554    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 03:49:35.424200    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I1028 03:49:35.428253    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.430011    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:47 /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.430048    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.432060    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
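	The openssl x509 -hash calls explain the otherwise opaque symlink names above: OpenSSL looks CA certificates up by subject hash, so /etc/ssl/certs/b5213941.0 is simply "<subject hash of minikubeCA.pem>.0". Producing the same link by hand (a sketch):
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	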
	I1028 03:49:35.435823    2088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 03:49:35.437441    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 03:49:35.439374    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 03:49:35.441350    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 03:49:35.443258    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 03:49:35.445311    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 03:49:35.447296    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
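	Each -checkend 86400 run above asks whether the certificate expires within the next 24 hours (86,400 seconds); exit status 0 means it stays valid past that window, which is why no warnings follow. Standalone form (a sketch):
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expires within 24h"
	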
	I1028 03:49:35.449296    2088 kubeadm.go:392] StartCluster: {Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:35.449371    2088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 03:49:35.455398    2088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 03:49:35.459251    2088 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 03:49:35.459257    2088 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 03:49:35.459286    2088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 03:49:35.462865    2088 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.463226    2088 kubeconfig.go:125] found "functional-940000" server: "https://192.168.105.4:8441"
	I1028 03:49:35.464170    2088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 03:49:35.468255    2088 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
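
Drift detection above is just "diff -u" between the deployed kubeadm.yaml and the freshly generated kubeadm.yaml.new: exit status 0 means no change, 1 means the configs differ (here the apiserver's enable-admission-plugins extraArg changed to NamespaceAutoProvision, matching ExtraOptions in the StartCluster config), and anything higher is a failure. A Go sketch of that convention (hypothetical helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrift runs "diff -u old new" and reads the exit status the way the
    // log above does: 0 = identical, 1 = drift (reconfigure), >1 = failure.
    func configDrift(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drift {
            fmt.Print(diff)
        }
    }
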
	I1028 03:49:35.468259    2088 kubeadm.go:1160] stopping kube-system containers ...
	I1028 03:49:35.468326    2088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 03:49:35.475703    2088 docker.go:483] Stopping containers: [39fd8deedf87 eb0e5a6bd50d effd798b9372 a3a799cc5c81 590b9d6083e7 ed17182b545c 15338a6f8b02 5390d2837e3c 808a48de8b00 cdc09f4247ba 40c71e637bf7 928a83498715 4917fb563755 758d8fd91a40 14e0ef549e95 c50434193c7a 92ef6f575cc5 a1616cb5a56b a931c4241e1d a1ce3fe79051 553066c0a54e b32b5b79fbdd 2781068c10c6 a57e0e004a13 86dabac6702b 2d7c7f8252a3 1cc5aafe80d5 d8d1eefe1982]
	I1028 03:49:35.475771    2088 ssh_runner.go:195] Run: docker stop 39fd8deedf87 eb0e5a6bd50d effd798b9372 a3a799cc5c81 590b9d6083e7 ed17182b545c 15338a6f8b02 5390d2837e3c 808a48de8b00 cdc09f4247ba 40c71e637bf7 928a83498715 4917fb563755 758d8fd91a40 14e0ef549e95 c50434193c7a 92ef6f575cc5 a1616cb5a56b a931c4241e1d a1ce3fe79051 553066c0a54e b32b5b79fbdd 2781068c10c6 a57e0e004a13 86dabac6702b 2d7c7f8252a3 1cc5aafe80d5 d8d1eefe1982
	I1028 03:49:35.483305    2088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 03:49:35.591682    2088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 03:49:35.597805    2088 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct 28 10:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 28 10:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 28 10:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct 28 10:48 /etc/kubernetes/scheduler.conf
	
	I1028 03:49:35.597845    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1028 03:49:35.602948    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1028 03:49:35.608033    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1028 03:49:35.612549    2088 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.612580    2088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 03:49:35.616834    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1028 03:49:35.620583    2088 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.620606    2088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 03:49:35.624331    2088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 03:49:35.628300    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:35.645696    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.055295    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.179926    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.215793    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
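
The five commands above replay individual kubeadm init phases rather than a full init, which is how minikube reconfigures a live control plane in place: regenerate certs and kubeconfigs, restart the kubelet, rewrite the control-plane static-pod manifests, then local etcd. A compact sketch of the same sequence (assuming kubeadm on PATH; the real runs pin PATH to /var/lib/minikube/binaries/v1.31.2 and go over SSH):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
            }
        }
    }
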
	I1028 03:49:36.238348    2088 api_server.go:52] waiting for apiserver process to appear ...
	I1028 03:49:36.238441    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:36.740900    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:37.240530    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:37.245629    2088 api_server.go:72] duration metric: took 1.007280084s to wait for apiserver process to appear ...
	I1028 03:49:37.245635    2088 api_server.go:88] waiting for apiserver healthz status ...
	I1028 03:49:37.245648    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.563527    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 03:49:39.563536    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 03:49:39.563541    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.605967    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:39.605977    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:39.747753    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.750643    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:39.750649    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:40.247749    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:40.252540    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:40.252552    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:40.747738    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:40.751993    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1028 03:49:40.756488    2088 api_server.go:141] control plane version: v1.31.2
	I1028 03:49:40.756497    2088 api_server.go:131] duration metric: took 3.510847416s to wait for apiserver health ...
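
The healthz wait above tolerates two transient answers on the way to ready: 403 while the apiserver still sees the probe as system:anonymous (RBAC bootstrap roles not yet in place) and 500 while poststarthooks are failing, stopping at the first bare 200 "ok". A sketch of that poll loop (certificate verification skipped, as fits an anonymous probe; minikube's actual client setup differs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver until /healthz returns 200, treating 403
    // and 500 as "not ready yet" rather than as fatal errors.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "ok"
                }
                // 403: anonymous until RBAC bootstrap; 500: poststarthooks pending.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.105.4:8441/healthz", time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver healthy")
    }
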
	I1028 03:49:40.756502    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:40.756513    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:40.844599    2088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 03:49:40.847538    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 03:49:40.852405    2088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
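
The 496-byte file scp'd above is the bridge CNI config for /etc/cni/net.d/1-k8s.conflist. Its exact contents are not in this log; an illustrative bridge conflist of that general shape (subnet taken from the node's PodCIDR, 10.244.0.0/24, reported further down), embedded as a Go constant:

    // Illustrative only: a bridge CNI conflist of the kind minikube writes to
    // /etc/cni/net.d/1-k8s.conflist; the real 496-byte file is not in this log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`
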
	I1028 03:49:40.860531    2088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 03:49:40.864801    2088 system_pods.go:59] 6 kube-system pods found
	I1028 03:49:40.864810    2088 system_pods.go:61] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 03:49:40.864813    2088 system_pods.go:61] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 03:49:40.864816    2088 system_pods.go:61] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 03:49:40.864818    2088 system_pods.go:61] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 03:49:40.864820    2088 system_pods.go:61] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 03:49:40.864822    2088 system_pods.go:61] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 03:49:40.864824    2088 system_pods.go:74] duration metric: took 4.28775ms to wait for pod list to return data ...
	I1028 03:49:40.864827    2088 node_conditions.go:102] verifying NodePressure condition ...
	I1028 03:49:40.866380    2088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 03:49:40.866386    2088 node_conditions.go:123] node cpu capacity is 2
	I1028 03:49:40.866391    2088 node_conditions.go:105] duration metric: took 1.561875ms to run NodePressure ...
	I1028 03:49:40.866398    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:41.091536    2088 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 03:49:41.094366    2088 kubeadm.go:739] kubelet initialised
	I1028 03:49:41.094372    2088 kubeadm.go:740] duration metric: took 2.825041ms waiting for restarted kubelet to initialise ...
	I1028 03:49:41.094377    2088 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:41.097693    2088 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:43.112361    2088 pod_ready.go:103] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"False"
	I1028 03:49:44.613352    2088 pod_ready.go:93] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:44.613378    2088 pod_ready.go:82] duration metric: took 3.515661542s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:44.613395    2088 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:46.628063    2088 pod_ready.go:103] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"False"
	I1028 03:49:49.123390    2088 pod_ready.go:93] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:49.123403    2088 pod_ready.go:82] duration metric: took 4.509982709s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:49.123415    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.636353    2088 pod_ready.go:93] pod "kube-apiserver-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.636379    2088 pod_ready.go:82] duration metric: took 1.51294775s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.636398    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.643808    2088 pod_ready.go:93] pod "kube-proxy-hllfn" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.643820    2088 pod_ready.go:82] duration metric: took 7.414291ms for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.643830    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.650578    2088 pod_ready.go:93] pod "kube-scheduler-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.650591    2088 pod_ready.go:82] duration metric: took 6.753542ms for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.650603    2088 pod_ready.go:39] duration metric: took 9.556186208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:50.650623    2088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 03:49:50.664935    2088 ops.go:34] apiserver oom_adj: -16
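
Reading /proc/<pid>/oom_adj above confirms the apiserver is deprioritized as an OOM-kill candidate; on the legacy oom_adj scale, -16 strongly discourages the kernel from killing it. The equivalent check, sketched (apiserverPID is a hypothetical stand-in for the pgrep result in the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        apiserverPID := 1234 // hypothetical; the log obtains it via pgrep
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", apiserverPID))
        if err != nil {
            panic(err)
        }
        // -16 on the legacy oom_adj scale strongly deprioritizes the process.
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
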
	I1028 03:49:50.664943    2088 kubeadm.go:597] duration metric: took 15.205628041s to restartPrimaryControlPlane
	I1028 03:49:50.664949    2088 kubeadm.go:394] duration metric: took 15.215602209s to StartCluster
	I1028 03:49:50.664963    2088 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:50.665172    2088 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:49:50.665795    2088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:50.666230    2088 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 03:49:50.666245    2088 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 03:49:50.666316    2088 addons.go:69] Setting storage-provisioner=true in profile "functional-940000"
	I1028 03:49:50.666342    2088 addons.go:234] Setting addon storage-provisioner=true in "functional-940000"
	W1028 03:49:50.666348    2088 addons.go:243] addon storage-provisioner should already be in state true
	I1028 03:49:50.666367    2088 host.go:66] Checking if "functional-940000" exists ...
	I1028 03:49:50.666383    2088 addons.go:69] Setting default-storageclass=true in profile "functional-940000"
	I1028 03:49:50.666400    2088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-940000"
	I1028 03:49:50.666466    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:50.668304    2088 addons.go:234] Setting addon default-storageclass=true in "functional-940000"
	W1028 03:49:50.668310    2088 addons.go:243] addon default-storageclass should already be in state true
	I1028 03:49:50.668323    2088 host.go:66] Checking if "functional-940000" exists ...
	I1028 03:49:50.670497    2088 out.go:177] * Verifying Kubernetes components...
	I1028 03:49:50.671125    2088 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 03:49:50.674155    2088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 03:49:50.674170    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:50.678398    2088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 03:49:50.682418    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:50.685418    2088 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 03:49:50.685423    2088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 03:49:50.685431    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:50.812154    2088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 03:49:50.819548    2088 node_ready.go:35] waiting up to 6m0s for node "functional-940000" to be "Ready" ...
	I1028 03:49:50.820189    2088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 03:49:50.821178    2088 node_ready.go:49] node "functional-940000" has status "Ready":"True"
	I1028 03:49:50.821185    2088 node_ready.go:38] duration metric: took 1.624083ms for node "functional-940000" to be "Ready" ...
	I1028 03:49:50.821188    2088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:50.823499    2088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.825528    2088 pod_ready.go:93] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.825532    2088 pod_ready.go:82] duration metric: took 2.027583ms for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.825535    2088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.887957    2088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 03:49:51.119266    2088 pod_ready.go:93] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.119271    2088 pod_ready.go:82] duration metric: took 293.73225ms for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.119274    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.162785    2088 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 03:49:51.166710    2088 addons.go:510] duration metric: took 500.474208ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 03:49:51.522145    2088 pod_ready.go:93] pod "kube-apiserver-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.522170    2088 pod_ready.go:82] duration metric: took 402.886584ms for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.522186    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.923833    2088 pod_ready.go:93] pod "kube-proxy-hllfn" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.923864    2088 pod_ready.go:82] duration metric: took 401.6655ms for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.923882    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:52.326889    2088 pod_ready.go:93] pod "kube-scheduler-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:52.326920    2088 pod_ready.go:82] duration metric: took 403.02125ms for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:52.326942    2088 pod_ready.go:39] duration metric: took 1.505741542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
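
"Ready":"True" in the pod_ready lines means the pod's PodReady condition is True, not merely that its containers exist. A client-go sketch of the underlying check (assumes a configured *kubernetes.Clientset; this is not minikube's own helper):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a kube-system pod has condition Ready=True,
    // the predicate behind the "Ready":"True" log lines above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
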
	I1028 03:49:52.326983    2088 api_server.go:52] waiting for apiserver process to appear ...
	I1028 03:49:52.327312    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:52.346533    2088 api_server.go:72] duration metric: took 1.68027475s to wait for apiserver process to appear ...
	I1028 03:49:52.346548    2088 api_server.go:88] waiting for apiserver healthz status ...
	I1028 03:49:52.346566    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:52.353514    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1028 03:49:52.354416    2088 api_server.go:141] control plane version: v1.31.2
	I1028 03:49:52.354425    2088 api_server.go:131] duration metric: took 7.872ms to wait for apiserver health ...
	I1028 03:49:52.354430    2088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 03:49:52.531313    2088 system_pods.go:59] 6 kube-system pods found
	I1028 03:49:52.531354    2088 system_pods.go:61] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:52.531362    2088 system_pods.go:61] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:52.531368    2088 system_pods.go:61] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:52.531374    2088 system_pods.go:61] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:52.531379    2088 system_pods.go:61] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:52.531387    2088 system_pods.go:61] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:52.531398    2088 system_pods.go:74] duration metric: took 176.961291ms to wait for pod list to return data ...
	I1028 03:49:52.531416    2088 default_sa.go:34] waiting for default service account to be created ...
	I1028 03:49:52.725337    2088 default_sa.go:45] found service account: "default"
	I1028 03:49:52.725371    2088 default_sa.go:55] duration metric: took 193.939583ms for default service account to be created ...
	I1028 03:49:52.725387    2088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 03:49:52.930005    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:52.930039    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:52.930049    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:52.930056    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:52.930064    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:52.930070    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:52.930076    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:52.930128    2088 retry.go:31] will retry after 290.195252ms: missing components: kube-controller-manager
	I1028 03:49:53.235613    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:53.235648    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:53.235662    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:53.235668    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:53.235676    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:53.235681    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:53.235687    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:53.235714    2088 retry.go:31] will retry after 383.302622ms: missing components: kube-controller-manager
	I1028 03:49:53.633343    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:53.633373    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:53.633387    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:53.633394    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:53.633399    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:53.633405    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:53.633411    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:53.633437    2088 retry.go:31] will retry after 466.822653ms: missing components: kube-controller-manager
	I1028 03:49:54.115299    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:54.115337    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:54.115351    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:54.115357    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:54.115364    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:54.115370    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:54.115376    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:54.115406    2088 retry.go:31] will retry after 439.505374ms: missing components: kube-controller-manager
	I1028 03:49:54.573283    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:54.573317    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:54.573325    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:54.573330    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:54.573335    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:54.573339    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:54.573343    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:54.573366    2088 retry.go:31] will retry after 518.78481ms: missing components: kube-controller-manager
	I1028 03:49:55.106876    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:55.106907    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:55.106920    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:55.106927    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:55.106933    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:55.106939    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:55.106943    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:55.106971    2088 retry.go:31] will retry after 634.219295ms: missing components: kube-controller-manager
	I1028 03:49:55.751992    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:55.752023    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:55.752031    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:55.752035    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:55.752039    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:55.752051    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:55.752055    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:55.752082    2088 retry.go:31] will retry after 909.805144ms: missing components: kube-controller-manager
	I1028 03:49:56.668065    2088 system_pods.go:86] 7 kube-system pods found
	I1028 03:49:56.668074    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:56.668076    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:56.668078    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:56.668079    2088 system_pods.go:89] "kube-controller-manager-functional-940000" [d2fef415-2950-4f08-8dea-980e6a61a55f] Pending
	I1028 03:49:56.668081    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:56.668082    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:56.668084    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:56.668092    2088 retry.go:31] will retry after 1.150009358s: missing components: kube-controller-manager
	I1028 03:49:57.824769    2088 system_pods.go:86] 7 kube-system pods found
	I1028 03:49:57.824782    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:57.824786    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:57.824788    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:57.824796    2088 system_pods.go:89] "kube-controller-manager-functional-940000" [d2fef415-2950-4f08-8dea-980e6a61a55f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 03:49:57.824799    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:57.824802    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:57.824805    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:57.824810    2088 system_pods.go:126] duration metric: took 5.099395291s to wait for k8s-apps to be running ...
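
The retry.go lines above show the shape of the wait: re-list the kube-system pods and sleep for a jittered, growing delay (290ms up to 1.15s here) until the missing component, kube-controller-manager, appears and runs. A sketch of such a backoff loop (allRunning is a stand-in for the pod listing above):

    package main

    import (
        "math/rand"
        "time"
    )

    // allRunning stands in for the kube-system pod check in the log above.
    func allRunning() bool { return true }

    func main() {
        // Jittered, growing delays like retry.go's "will retry after ..." lines.
        delay := 300 * time.Millisecond
        for !allRunning() {
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            delay = delay * 3 / 2 // grow roughly geometrically, as logged
        }
    }
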
	I1028 03:49:57.824816    2088 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 03:49:57.824978    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 03:49:57.835540    2088 system_svc.go:56] duration metric: took 10.706541ms WaitForService to wait for kubelet
	I1028 03:49:57.835554    2088 kubeadm.go:582] duration metric: took 7.169281084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 03:49:57.835570    2088 node_conditions.go:102] verifying NodePressure condition ...
	I1028 03:49:57.837807    2088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 03:49:57.837812    2088 node_conditions.go:123] node cpu capacity is 2
	I1028 03:49:57.837818    2088 node_conditions.go:105] duration metric: took 2.245334ms to run NodePressure ...
	I1028 03:49:57.837825    2088 start.go:241] waiting for startup goroutines ...
	I1028 03:49:57.837828    2088 start.go:246] waiting for cluster config update ...
	I1028 03:49:57.837835    2088 start.go:255] writing updated cluster config ...
	I1028 03:49:57.838225    2088 ssh_runner.go:195] Run: rm -f paused
	I1028 03:49:57.873351    2088 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1028 03:49:57.877549    2088 out.go:177] * Done! kubectl is now configured to use "functional-940000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.669530432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77b9c769130ade1ea0562d08866b3d8911e752ae7f4b64abb0c34b5d24c0f2eb/resolv.conf as [nameserver 192.168.105.1]"
	Oct 28 10:49:40 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/46c1dc34e52065b95bc83bd9c6e8c1d07bf77ebea34aa73ef77525cfba1ace60/resolv.conf as [nameserver 192.168.105.1]"
	Oct 28 10:49:40 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/68601b2c8889be11d2b36d4792a0f33ebceb1051868b9c7e06d2e02662768127/resolv.conf as [nameserver 192.168.105.1]"
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776157597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776207515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776215349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776074387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776185723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776194890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776579441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.776967909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.812232803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.812263846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.812281304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:40 functional-940000 dockerd[5678]: time="2024-10-28T10:49:40.812314222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.684439976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.684495477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.684512311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.684568187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:56 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:49:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4df7e349a8eb6d2d41976967679d5ac76d23e46a5050f55412fa0671efe507c2/resolv.conf as [nameserver 192.168.105.1]"
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.760322121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.760418415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.760451457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:49:56 functional-940000 dockerd[5678]: time="2024-10-28T10:49:56.760499958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e1379d9874bdb       9404aea098d9e       2 seconds ago        Running             kube-controller-manager   2                   4df7e349a8eb6       kube-controller-manager-functional-940000
	6f3f41d1b514b       2f6c962e7b831       18 seconds ago       Running             coredns                   2                   68601b2c8889b       coredns-7c65d6cfc9-nmlwl
	f5e76a37ed6f1       ba04bb24b9575       18 seconds ago       Running             storage-provisioner       2                   46c1dc34e5206       storage-provisioner
	02ca52d4decb3       021d242013305       18 seconds ago       Running             kube-proxy                2                   77b9c769130ad       kube-proxy-hllfn
	6249c5bc32918       27e3830e14027       21 seconds ago       Running             etcd                      2                   5fd2b7378b694       etcd-functional-940000
	96355ca0248be       d6b061e73ae45       21 seconds ago       Running             kube-scheduler            2                   1687a96d0e62e       kube-scheduler-functional-940000
	3d5f6e4b3f879       f9c26480f1e72       21 seconds ago       Running             kube-apiserver            0                   35eae2c916071       kube-apiserver-functional-940000
	39fd8deedf87f       2f6c962e7b831       About a minute ago   Exited              coredns                   1                   a3a799cc5c81b       coredns-7c65d6cfc9-nmlwl
	eb0e5a6bd50d0       ba04bb24b9575       About a minute ago   Exited              storage-provisioner       1                   590b9d6083e73       storage-provisioner
	effd798b9372f       021d242013305       About a minute ago   Exited              kube-proxy                1                   ed17182b545cf       kube-proxy-hllfn
	15338a6f8b02a       27e3830e14027       About a minute ago   Exited              etcd                      1                   4917fb5637554       etcd-functional-940000
	5390d2837e3c6       f9c26480f1e72       About a minute ago   Exited              kube-apiserver            1                   40c71e637bf7a       kube-apiserver-functional-940000
	808a48de8b008       9404aea098d9e       About a minute ago   Exited              kube-controller-manager   1                   758d8fd91a405       kube-controller-manager-functional-940000
	cdc09f4247ba8       d6b061e73ae45       About a minute ago   Exited              kube-scheduler            1                   928a834987153       kube-scheduler-functional-940000
	
	
	==> coredns [39fd8deedf87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50744 - 56984 "HINFO IN 9008655988936561773.4025288987151893064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010423885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6f3f41d1b514] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50050 - 62484 "HINFO IN 2111370520541573694.6997983717341631236. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004942411s
	
	
	==> describe nodes <==
	Name:               functional-940000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-940000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=functional-940000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T03_48_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:48:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-940000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 10:49:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 10:49:39 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 10:49:39 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 10:49:39 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 10:49:39 +0000   Mon, 28 Oct 2024 10:48:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-940000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 459409642a424b32848f38ff16abbdd1
	  System UUID:                459409642a424b32848f38ff16abbdd1
	  Boot ID:                    10f2f602-1e59-480d-9138-a9cae6ead9ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nmlwl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     90s
	  kube-system                 etcd-functional-940000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         96s
	  kube-system                 kube-apiserver-functional-940000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-functional-940000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-proxy-hllfn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-functional-940000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 59s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x7 over 100s)  kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                92s                  kubelet          Node functional-940000 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node functional-940000 event: Registered Node functional-940000 in Controller
	  Normal  Starting                 64s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)    kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)    kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)    kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                  node-controller  Node functional-940000 event: Registered Node functional-940000 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +7.319901] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.097092] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.087645] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.090563] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.198638] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
	[  +1.014026] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +4.410500] kauditd_printk_skb: 199 callbacks suppressed
	[Oct28 10:49] systemd-fstab-generator[4755]: Ignoring "noauto" option for root device
	[  +0.059571] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.721723] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[  +0.053055] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.105746] systemd-fstab-generator[5216]: Ignoring "noauto" option for root device
	[  +0.096701] systemd-fstab-generator[5228]: Ignoring "noauto" option for root device
	[  +0.110427] systemd-fstab-generator[5242]: Ignoring "noauto" option for root device
	[  +5.121572] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.337580] systemd-fstab-generator[5881]: Ignoring "noauto" option for root device
	[  +0.091112] systemd-fstab-generator[5893]: Ignoring "noauto" option for root device
	[  +0.086138] systemd-fstab-generator[5905]: Ignoring "noauto" option for root device
	[  +0.091194] systemd-fstab-generator[5920]: Ignoring "noauto" option for root device
	[  +0.227155] systemd-fstab-generator[6086]: Ignoring "noauto" option for root device
	[  +0.878933] systemd-fstab-generator[6209]: Ignoring "noauto" option for root device
	[  +4.425561] kauditd_printk_skb: 189 callbacks suppressed
	[ +10.195576] systemd-fstab-generator[7111]: Ignoring "noauto" option for root device
	[  +0.052824] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.850589] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [15338a6f8b02] <==
	{"level":"info","ts":"2024-10-28T10:48:56.844757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T10:48:56.844850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-28T10:48:56.844886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.849491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-940000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T10:48:56.849580Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:48:56.850299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T10:48:56.850497Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T10:48:56.850314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:48:56.851862Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:48:56.852258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:48:56.854427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-28T10:48:56.854471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T10:49:22.450408Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T10:49:22.450434Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-940000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-28T10:49:22.450490Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.450531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.457375Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.457397Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T10:49:22.457418Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-28T10:49:22.459200Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:22.459235Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:22.459239Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-940000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [6249c5bc3291] <==
	{"level":"info","ts":"2024-10-28T10:49:37.317907Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-28T10:49:37.317968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T10:49:37.317996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T10:49:37.319220Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:37.319874Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T10:49:37.322901Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:37.323207Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:37.323484Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T10:49:37.323511Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T10:49:39.088430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.094341Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-940000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T10:49:39.094631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T10:49:39.094775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T10:49:39.094647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:49:39.094676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:49:39.096788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:39.096788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:39.098598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-28T10:49:39.099391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:49:58 up 1 min,  0 users,  load average: 0.81, 0.34, 0.13
	Linux functional-940000 5.10.207 #1 SMP PREEMPT Tue Oct 15 16:10:02 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d5f6e4b3f87] <==
	I1028 10:49:39.684284       1 policy_source.go:224] refreshing policies
	I1028 10:49:39.684309       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 10:49:39.695823       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 10:49:39.695884       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 10:49:39.695930       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 10:49:39.696346       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 10:49:39.697065       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 10:49:39.697150       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 10:49:39.697501       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 10:49:39.697554       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 10:49:39.697593       1 aggregator.go:171] initial CRD sync complete...
	I1028 10:49:39.697610       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 10:49:39.697641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 10:49:39.697660       1 cache.go:39] Caches are synced for autoregister controller
	I1028 10:49:39.698763       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 10:49:39.724691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 10:49:40.598695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1028 10:49:40.701877       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1028 10:49:40.702485       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 10:49:40.707799       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 10:49:40.952524       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 10:49:40.956457       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 10:49:40.966999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 10:49:40.973945       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 10:49:40.975909       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [5390d2837e3c] <==
	W1028 10:49:31.705778       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.719341       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.719341       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.779506       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.782011       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.789649       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.803340       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.812931       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.875569       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.891247       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.893625       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.941572       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:31.974075       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.068667       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.071207       1 logging.go:55] [core] [Channel #16 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.117204       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.153057       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.227069       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.288930       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.309869       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.334470       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.343152       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.349810       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.446323       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 10:49:32.449868       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [808a48de8b00] <==
	I1028 10:49:00.785896       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-940000\" does not exist"
	I1028 10:49:00.787237       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 10:49:00.801590       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 10:49:00.807285       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 10:49:00.808742       1 shared_informer.go:320] Caches are synced for namespace
	I1028 10:49:00.809617       1 shared_informer.go:320] Caches are synced for service account
	I1028 10:49:00.811984       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1028 10:49:00.832983       1 shared_informer.go:320] Caches are synced for TTL
	I1028 10:49:00.835219       1 shared_informer.go:320] Caches are synced for node
	I1028 10:49:00.835353       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1028 10:49:00.835368       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1028 10:49:00.835371       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1028 10:49:00.835373       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1028 10:49:00.835430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-940000"
	I1028 10:49:00.837169       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:00.884459       1 shared_informer.go:320] Caches are synced for GC
	I1028 10:49:00.893880       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:00.933122       1 shared_informer.go:320] Caches are synced for taint
	I1028 10:49:00.933186       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 10:49:00.933239       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-940000"
	I1028 10:49:00.933275       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 10:49:00.983020       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 10:49:01.357196       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:01.435138       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:01.435194       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e1379d9874bd] <==
	I1028 10:49:58.002740       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1028 10:49:58.052276       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1028 10:49:58.052315       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1028 10:49:58.052321       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1028 10:49:58.103111       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1028 10:49:58.103147       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1028 10:49:58.152712       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1028 10:49:58.152753       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1028 10:49:58.152762       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1028 10:49:58.202440       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1028 10:49:58.202523       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1028 10:49:58.202530       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1028 10:49:58.252269       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1028 10:49:58.252300       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1028 10:49:58.252306       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1028 10:49:58.302455       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1028 10:49:58.302467       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1028 10:49:58.302477       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1028 10:49:58.302498       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1028 10:49:58.302503       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1028 10:49:58.502323       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1028 10:49:58.502403       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1028 10:49:58.502411       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1028 10:49:58.504618       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1028 10:49:58.516497       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	
	
	==> kube-proxy [02ca52d4decb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:49:40.856450       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:49:40.904765       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1028 10:49:40.904847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:49:40.926998       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:49:40.927019       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:49:40.927034       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:49:40.927845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:49:40.927983       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:49:40.927988       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:49:40.928902       1 config.go:199] "Starting service config controller"
	I1028 10:49:40.928948       1 config.go:328] "Starting node config controller"
	I1028 10:49:40.928974       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:49:40.929012       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:49:40.929032       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:49:40.931670       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:49:40.931745       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:49:41.029350       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:49:41.029343       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [effd798b9372] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:48:58.720778       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:48:58.731500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1028 10:48:58.731536       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:48:58.750529       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:48:58.750552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:48:58.750567       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:48:58.751283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:48:58.751404       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:48:58.751412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:48:58.751913       1 config.go:199] "Starting service config controller"
	I1028 10:48:58.751927       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:48:58.751939       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:48:58.751995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:48:58.752189       1 config.go:328] "Starting node config controller"
	I1028 10:48:58.752196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:48:58.853014       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:48:58.853035       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:48:58.853048       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [96355ca0248b] <==
	I1028 10:49:37.544473       1 serving.go:386] Generated self-signed cert in-memory
	W1028 10:49:39.616327       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 10:49:39.616428       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 10:49:39.616471       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 10:49:39.616490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 10:49:39.631831       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 10:49:39.631934       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:49:39.632995       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 10:49:39.634915       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 10:49:39.634977       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:49:39.635004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 10:49:39.735860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cdc09f4247ba] <==
	I1028 10:48:55.284205       1 serving.go:386] Generated self-signed cert in-memory
	W1028 10:48:57.374678       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 10:48:57.374718       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 10:48:57.374739       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 10:48:57.374746       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 10:48:57.409340       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 10:48:57.409356       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:48:57.410259       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 10:48:57.415476       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:48:57.415702       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 10:48:57.415753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 10:48:57.516619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:49:22.440059       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1028 10:49:22.440082       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1028 10:49:22.440150       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 10:49:39 functional-940000 kubelet[6216]: I1028 10:49:39.712551    6216 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 10:49:39 functional-940000 kubelet[6216]: I1028 10:49:39.712921    6216 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 10:49:39 functional-940000 kubelet[6216]: E1028 10:49:39.763511    6216 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-functional-940000\" already exists" pod="kube-system/kube-scheduler-functional-940000"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.269294    6216 apiserver.go:52] "Watching apiserver"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.274445    6216 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-940000" podUID="195933f4-911f-4afb-b3f6-9b4a533a4e9c"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.293039    6216 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-940000"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.293739    6216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca027973e7a3c8574ac52cccc4b68639" path="/var/lib/kubelet/pods/ca027973e7a3c8574ac52cccc4b68639/volumes"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.294074    6216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f801f0ef8e4c9584725f8d8afdbb159a" path="/var/lib/kubelet/pods/f801f0ef8e4c9584725f8d8afdbb159a/volumes"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.340153    6216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-940000" podStartSLOduration=0.340140539 podStartE2EDuration="340.140539ms" podCreationTimestamp="2024-10-28 10:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 10:49:40.337108632 +0000 UTC m=+4.111304275" watchObservedRunningTime="2024-10-28 10:49:40.340140539 +0000 UTC m=+4.114336181"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.369629    6216 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: E1028 10:49:40.387330    6216 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-functional-940000\" already exists" pod="kube-system/etcd-functional-940000"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.423320    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed1390bd-aeb1-46b7-9e4e-5d2956b1b205-xtables-lock\") pod \"kube-proxy-hllfn\" (UID: \"ed1390bd-aeb1-46b7-9e4e-5d2956b1b205\") " pod="kube-system/kube-proxy-hllfn"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.423398    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58ae0bb2-4407-4b87-a38d-80e08b35bf8e-tmp\") pod \"storage-provisioner\" (UID: \"58ae0bb2-4407-4b87-a38d-80e08b35bf8e\") " pod="kube-system/storage-provisioner"
	Oct 28 10:49:40 functional-940000 kubelet[6216]: I1028 10:49:40.423432    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed1390bd-aeb1-46b7-9e4e-5d2956b1b205-lib-modules\") pod \"kube-proxy-hllfn\" (UID: \"ed1390bd-aeb1-46b7-9e4e-5d2956b1b205\") " pod="kube-system/kube-proxy-hllfn"
	Oct 28 10:49:44 functional-940000 kubelet[6216]: I1028 10:49:44.234626    6216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: E1028 10:49:56.268583    6216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f801f0ef8e4c9584725f8d8afdbb159a" containerName="kube-apiserver"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.269378    6216 memory_manager.go:354] "RemoveStaleState removing state" podUID="f801f0ef8e4c9584725f8d8afdbb159a" containerName="kube-apiserver"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.269458    6216 memory_manager.go:354] "RemoveStaleState removing state" podUID="f801f0ef8e4c9584725f8d8afdbb159a" containerName="kube-apiserver"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.430024    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca027973e7a3c8574ac52cccc4b68639-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-940000\" (UID: \"ca027973e7a3c8574ac52cccc4b68639\") " pod="kube-system/kube-controller-manager-functional-940000"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.430071    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca027973e7a3c8574ac52cccc4b68639-kubeconfig\") pod \"kube-controller-manager-functional-940000\" (UID: \"ca027973e7a3c8574ac52cccc4b68639\") " pod="kube-system/kube-controller-manager-functional-940000"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.430091    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca027973e7a3c8574ac52cccc4b68639-flexvolume-dir\") pod \"kube-controller-manager-functional-940000\" (UID: \"ca027973e7a3c8574ac52cccc4b68639\") " pod="kube-system/kube-controller-manager-functional-940000"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.430108    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca027973e7a3c8574ac52cccc4b68639-k8s-certs\") pod \"kube-controller-manager-functional-940000\" (UID: \"ca027973e7a3c8574ac52cccc4b68639\") " pod="kube-system/kube-controller-manager-functional-940000"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.430127    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca027973e7a3c8574ac52cccc4b68639-ca-certs\") pod \"kube-controller-manager-functional-940000\" (UID: \"ca027973e7a3c8574ac52cccc4b68639\") " pod="kube-system/kube-controller-manager-functional-940000"
	Oct 28 10:49:56 functional-940000 kubelet[6216]: I1028 10:49:56.738971    6216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df7e349a8eb6d2d41976967679d5ac76d23e46a5050f55412fa0671efe507c2"
	Oct 28 10:49:57 functional-940000 kubelet[6216]: I1028 10:49:57.775838    6216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-940000" podStartSLOduration=1.7758206159999999 podStartE2EDuration="1.775820616s" podCreationTimestamp="2024-10-28 10:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-28 10:49:57.775515942 +0000 UTC m=+21.549711585" watchObservedRunningTime="2024-10-28 10:49:57.775820616 +0000 UTC m=+21.550016258"
	
	
	==> storage-provisioner [eb0e5a6bd50d] <==
	I1028 10:48:58.697853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:48:58.704167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:48:58.704285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:48:58.708270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:48:58.708500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d80c73e-ab9d-4477-83ba-bc4a70734202", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3 became leader
	I1028 10:48:58.708514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3!
	I1028 10:48:58.809168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3!
	
	
	==> storage-provisioner [f5e76a37ed6f] <==
	I1028 10:49:40.807808       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:49:40.816549       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:49:40.816566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:49:58.308932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:49:58.309060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d80c73e-ab9d-4477-83ba-bc4a70734202", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7 became leader
	I1028 10:49:58.309074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7!
	I1028 10:49:58.409720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-940000 -n functional-940000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-940000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (0.92s)
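
For reference, the ComponentHealth post-mortem above ends with the harness shelling out to kubectl (helpers_test.go:261) to list any pod whose phase is not Running. Below is a minimal Go sketch of that style of check; it assumes only that kubectl is on PATH and that a functional-940000 context exists, and it is illustrative, not the harness's actual helper:

```go
// Illustrative sketch: reproduce the non-Running-pods query the harness
// logs above. Shelling out to kubectl (rather than using client-go) is an
// assumption made here to keep the example self-contained.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "functional-940000",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if len(out) > 0 {
		fmt.Println("pods not Running:", string(out))
	} else {
		fmt.Println("all pods Running")
	}
}
```

Shelling out mirrors what the test helpers do; a client-go implementation would avoid the kubectl dependency but is heavier than needed for a post-mortem check.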

TestFunctional/parallel/ServiceCmdConnect (33.32s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-940000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-940000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-82sgl" [94f2800c-2794-4739-b7fc-7a902692f7fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-82sgl" [94f2800c-2794-4739-b7fc-7a902692f7fc] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003784291s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30598
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:28.752308    1598 retry.go:31] will retry after 1.142369154s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:29.898635    1598 retry.go:31] will retry after 1.727534867s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:31.630205    1598 retry.go:31] will retry after 3.211572733s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:34.845720    1598 retry.go:31] will retry after 4.290558784s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:39.139473    1598 retry.go:31] will retry after 7.512272213s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
I1028 03:50:46.654525    1598 retry.go:31] will retry after 6.14339987s: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30598: Get "http://192.168.105.4:30598": dial tcp 192.168.105.4:30598: connect: connection refused
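The retry.go entries above show the client-side backoff: each refused GET is retried after a roughly doubling, jittered delay until the retry budget runs out. A minimal shell sketch of that pattern (an illustration only, not minikube's retry.go):

    url=http://192.168.105.4:30598
    delay=1
    for attempt in 1 2 3 4 5 6; do
      curl -sf --max-time 5 "$url" && break
      sleep "$delay"
      delay=$((delay * 2))    # retry.go additionally randomizes each delay
    done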
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-940000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-82sgl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-940000/192.168.105.4
Start Time:       Mon, 28 Oct 2024 03:50:20 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://8955cdd9ab4bc4e217f8ab0788d0364d08253fe0868e77dfebe3fcc105922fb0
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 28 Oct 2024 03:50:35 -0700
      Finished:     Mon, 28 Oct 2024 03:50:35 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dpq6c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-dpq6c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  32s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-82sgl to functional-940000
Normal   Pulled     17s (x3 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    17s (x3 over 31s)  kubelet            Created container echoserver-arm
Normal   Started    17s (x3 over 31s)  kubelet            Started container echoserver-arm
Warning  BackOff    3s (x4 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-82sgl_default(94f2800c-2794-4739-b7fc-7a902692f7fc)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-940000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
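An "exec format error" from the kernel means the container's entrypoint binary was built for a different CPU architecture than the host; on this arm64 VM it suggests the echoserver-arm:1.8 image actually ships an amd64 /usr/sbin/nginx. One way to check the image's recorded platform (a hypothetical follow-up, not part of the test run):

    minikube -p functional-940000 ssh -- docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8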
functional_test.go:1614: (dbg) Run:  kubectl --context functional-940000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.19.20
IPs:                      10.97.19.20
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30598/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
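The empty Endpoints: field above is the immediate cause of the connection refused errors: with its only pod in CrashLoopBackOff, the Service has no ready backends, so nothing answers on NodePort 30598. This could be confirmed with (hypothetical):

    kubectl --context functional-940000 get endpoints hello-node-connect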
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-940000 -n functional-940000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-940000 service list                                                                                       | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-940000 service                                                                                            | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-940000                                                                                                    | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-940000 service                                                                                            | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-940000 addons list                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	| addons  | functional-940000 addons list                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-940000 service                                                                                            | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-940000                                                                                                 | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh -- ls                                                                                          | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh cat                                                                                            | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | /mount-9p/test-1730112645179756000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh stat                                                                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh stat                                                                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh sudo                                                                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-940000                                                                                                 | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1454789778/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh -- ls                                                                                          | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT | 28 Oct 24 03:50 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh sudo                                                                                           | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-940000                                                                                                 | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-940000                                                                                                 | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-940000                                                                                                 | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-940000 ssh findmnt                                                                                        | functional-940000 | jenkins | v1.34.0 | 28 Oct 24 03:50 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 03:49:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 03:49:21.169123    2088 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:49:21.169275    2088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:49:21.169277    2088 out.go:358] Setting ErrFile to fd 2...
	I1028 03:49:21.169279    2088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:49:21.169406    2088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:49:21.170543    2088 out.go:352] Setting JSON to false
	I1028 03:49:21.188340    2088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1132,"bootTime":1730111429,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:49:21.188413    2088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:49:21.193299    2088 out.go:177] * [functional-940000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:49:21.201291    2088 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 03:49:21.201352    2088 notify.go:220] Checking for updates...
	I1028 03:49:21.208238    2088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:49:21.211305    2088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:49:21.214277    2088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:49:21.217260    2088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 03:49:21.220263    2088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 03:49:21.223599    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:21.223652    2088 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:49:21.227221    2088 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 03:49:21.234302    2088 start.go:297] selected driver: qemu2
	I1028 03:49:21.234307    2088 start.go:901] validating driver "qemu2" against &{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:21.234365    2088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 03:49:21.236852    2088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 03:49:21.236874    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:21.236898    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:21.236939    2088 start.go:340] cluster config:
	{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:21.241346    2088 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:49:21.249244    2088 out.go:177] * Starting "functional-940000" primary control-plane node in "functional-940000" cluster
	I1028 03:49:21.253330    2088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:49:21.253345    2088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 03:49:21.253351    2088 cache.go:56] Caching tarball of preloaded images
	I1028 03:49:21.253422    2088 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 03:49:21.253428    2088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 03:49:21.253491    2088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/config.json ...
	I1028 03:49:21.253897    2088 start.go:360] acquireMachinesLock for functional-940000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 03:49:21.253942    2088 start.go:364] duration metric: took 40.583µs to acquireMachinesLock for "functional-940000"
	I1028 03:49:21.253949    2088 start.go:96] Skipping create...Using existing machine configuration
	I1028 03:49:21.253951    2088 fix.go:54] fixHost starting: 
	I1028 03:49:21.254548    2088 fix.go:112] recreateIfNeeded on functional-940000: state=Running err=<nil>
	W1028 03:49:21.254554    2088 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 03:49:21.263286    2088 out.go:177] * Updating the running qemu2 "functional-940000" VM ...
	I1028 03:49:21.267282    2088 machine.go:93] provisionDockerMachine start ...
	I1028 03:49:21.267334    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.267486    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.267489    2088 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 03:49:21.309800    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-940000
	
	I1028 03:49:21.309809    2088 buildroot.go:166] provisioning hostname "functional-940000"
	I1028 03:49:21.309851    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.309964    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.309968    2088 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-940000 && echo "functional-940000" | sudo tee /etc/hostname
	I1028 03:49:21.352660    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-940000
	
	I1028 03:49:21.352708    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.352813    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.352819    2088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-940000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-940000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-940000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 03:49:21.392478    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 03:49:21.392485    2088 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19876-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19876-1087/.minikube}
	I1028 03:49:21.392494    2088 buildroot.go:174] setting up certificates
	I1028 03:49:21.392498    2088 provision.go:84] configureAuth start
	I1028 03:49:21.392504    2088 provision.go:143] copyHostCerts
	I1028 03:49:21.392573    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem, removing ...
	I1028 03:49:21.392577    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem
	I1028 03:49:21.392817    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem (1679 bytes)
	I1028 03:49:21.393030    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem, removing ...
	I1028 03:49:21.393033    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem
	I1028 03:49:21.393088    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem (1078 bytes)
	I1028 03:49:21.393208    2088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem, removing ...
	I1028 03:49:21.393210    2088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem
	I1028 03:49:21.393258    2088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem (1123 bytes)
	I1028 03:49:21.393350    2088 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem org=jenkins.functional-940000 san=[127.0.0.1 192.168.105.4 functional-940000 localhost minikube]
	I1028 03:49:21.514706    2088 provision.go:177] copyRemoteCerts
	I1028 03:49:21.514754    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 03:49:21.514760    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.537083    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 03:49:21.545399    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 03:49:21.553874    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 03:49:21.562772    2088 provision.go:87] duration metric: took 170.266542ms to configureAuth
	I1028 03:49:21.562778    2088 buildroot.go:189] setting minikube options for container-runtime
	I1028 03:49:21.562908    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:21.562953    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.563043    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.563046    2088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 03:49:21.602873    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 03:49:21.602878    2088 buildroot.go:70] root file system type: tmpfs
	I1028 03:49:21.602923    2088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 03:49:21.602987    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.603083    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.603114    2088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 03:49:21.645992    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 03:49:21.646050    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.646170    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.646176    2088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 03:49:21.687914    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 03:49:21.687920    2088 machine.go:96] duration metric: took 420.632709ms to provisionDockerMachine
	I1028 03:49:21.687924    2088 start.go:293] postStartSetup for "functional-940000" (driver="qemu2")
	I1028 03:49:21.687930    2088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 03:49:21.687977    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 03:49:21.687984    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.709763    2088 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 03:49:21.711199    2088 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 03:49:21.711204    2088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/addons for local assets ...
	I1028 03:49:21.711291    2088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/files for local assets ...
	I1028 03:49:21.711427    2088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I1028 03:49:21.711570    2088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/test/nested/copy/1598/hosts -> hosts in /etc/test/nested/copy/1598
	I1028 03:49:21.711620    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1598
	I1028 03:49:21.715579    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I1028 03:49:21.723711    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/test/nested/copy/1598/hosts --> /etc/test/nested/copy/1598/hosts (40 bytes)
	I1028 03:49:21.732406    2088 start.go:296] duration metric: took 44.477ms for postStartSetup
	I1028 03:49:21.732418    2088 fix.go:56] duration metric: took 478.465542ms for fixHost
	I1028 03:49:21.732465    2088 main.go:141] libmachine: Using SSH client type: native
	I1028 03:49:21.732569    2088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030965f0] 0x103098e30 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1028 03:49:21.732572    2088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 03:49:21.770236    2088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730112561.819452308
	
	I1028 03:49:21.770241    2088 fix.go:216] guest clock: 1730112561.819452308
	I1028 03:49:21.770244    2088 fix.go:229] Guest: 2024-10-28 03:49:21.819452308 -0700 PDT Remote: 2024-10-28 03:49:21.732419 -0700 PDT m=+0.584825959 (delta=87.033308ms)
	I1028 03:49:21.770253    2088 fix.go:200] guest clock delta is within tolerance: 87.033308ms
	I1028 03:49:21.770255    2088 start.go:83] releasing machines lock for "functional-940000", held for 516.308792ms
	I1028 03:49:21.770596    2088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 03:49:21.770596    2088 ssh_runner.go:195] Run: cat /version.json
	I1028 03:49:21.770603    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.770610    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:21.792035    2088 ssh_runner.go:195] Run: systemctl --version
	I1028 03:49:21.837060    2088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 03:49:21.838910    2088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 03:49:21.838937    2088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 03:49:21.842274    2088 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 03:49:21.842278    2088 start.go:495] detecting cgroup driver to use...
	I1028 03:49:21.842351    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 03:49:21.848854    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 03:49:21.852760    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 03:49:21.856607    2088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 03:49:21.856629    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 03:49:21.860223    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 03:49:21.864269    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 03:49:21.868372    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 03:49:21.872274    2088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 03:49:21.876543    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 03:49:21.880265    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 03:49:21.884380    2088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 03:49:21.888507    2088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 03:49:21.892403    2088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 03:49:21.896421    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:21.990133    2088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 03:49:22.001103    2088 start.go:495] detecting cgroup driver to use...
	I1028 03:49:22.001170    2088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 03:49:22.008119    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 03:49:22.013701    2088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 03:49:22.020392    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 03:49:22.026328    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 03:49:22.031732    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 03:49:22.038748    2088 ssh_runner.go:195] Run: which cri-dockerd
	I1028 03:49:22.040271    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 03:49:22.043765    2088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 03:49:22.050435    2088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 03:49:22.148379    2088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 03:49:22.243898    2088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 03:49:22.243953    2088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 03:49:22.250441    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:22.353701    2088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 03:49:34.690473    2088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.336715542s)
	I1028 03:49:34.690551    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 03:49:34.697162    2088 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 03:49:34.704938    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 03:49:34.710806    2088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 03:49:34.798582    2088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 03:49:34.891040    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:34.975539    2088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 03:49:34.983045    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 03:49:34.988898    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:35.065727    2088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 03:49:35.096755    2088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 03:49:35.096842    2088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 03:49:35.099201    2088 start.go:563] Will wait 60s for crictl version
	I1028 03:49:35.099250    2088 ssh_runner.go:195] Run: which crictl
	I1028 03:49:35.100714    2088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 03:49:35.112329    2088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 03:49:35.112420    2088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 03:49:35.120188    2088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 03:49:35.137784    2088 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 03:49:35.137942    2088 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1028 03:49:35.143771    2088 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1028 03:49:35.147763    2088 kubeadm.go:883] updating cluster {Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 03:49:35.147830    2088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:49:35.147897    2088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 03:49:35.154155    2088 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-940000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1028 03:49:35.154160    2088 docker.go:619] Images already preloaded, skipping extraction
	I1028 03:49:35.154215    2088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 03:49:35.159438    2088 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-940000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1028 03:49:35.159443    2088 cache_images.go:84] Images are preloaded, skipping loading
	I1028 03:49:35.159447    2088 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.2 docker true true} ...
	I1028 03:49:35.159499    2088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-940000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 03:49:35.159550    2088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 03:49:35.177915    2088 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1028 03:49:35.177924    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:35.177932    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:35.177938    2088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 03:49:35.177947    2088 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-940000 NodeName:functional-940000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 03:49:35.178001    2088 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-940000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.105.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
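
The rendered file above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the apiServer extraArgs carry the user-supplied enable-admission-plugins=NamespaceAutoProvision that replaced the default plugin list per the extraconfig.go line earlier. A sketch that walks such a multi-document stream and prints each document's schema, using the third-party gopkg.in/yaml.v3 package (an assumed dependency):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // assumed third-party dependency
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// For the file above this prints the four apiVersion/kind pairs:
    		// kubeadm.k8s.io/v1beta4 InitConfiguration and ClusterConfiguration,
    		// kubelet.config.k8s.io/v1beta1 KubeletConfiguration,
    		// kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.
    		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    	}
    }
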
	
	I1028 03:49:35.178070    2088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 03:49:35.181696    2088 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 03:49:35.181737    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 03:49:35.184948    2088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 03:49:35.190956    2088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 03:49:35.196812    2088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1028 03:49:35.202838    2088 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1028 03:49:35.204142    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:35.293823    2088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 03:49:35.300017    2088 certs.go:68] Setting up /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000 for IP: 192.168.105.4
	I1028 03:49:35.300023    2088 certs.go:194] generating shared ca certs ...
	I1028 03:49:35.300030    2088 certs.go:226] acquiring lock for ca certs: {Name:mk8f0a455373409f6ac5dde02ca67c613058d85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:35.300205    2088 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key
	I1028 03:49:35.300266    2088 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key
	I1028 03:49:35.300275    2088 certs.go:256] generating profile certs ...
	I1028 03:49:35.300359    2088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.key
	I1028 03:49:35.300427    2088 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.key.443fd431
	I1028 03:49:35.300489    2088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.key
	I1028 03:49:35.300662    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem (1338 bytes)
	W1028 03:49:35.300699    2088 certs.go:480] ignoring /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I1028 03:49:35.300703    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 03:49:35.300735    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem (1078 bytes)
	I1028 03:49:35.300765    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem (1123 bytes)
	I1028 03:49:35.300797    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem (1679 bytes)
	I1028 03:49:35.300859    2088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I1028 03:49:35.301204    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 03:49:35.309739    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 03:49:35.318070    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 03:49:35.326598    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 03:49:35.335291    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 03:49:35.343909    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 03:49:35.352563    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 03:49:35.360669    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 03:49:35.368786    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I1028 03:49:35.377043    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 03:49:35.385343    2088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I1028 03:49:35.393942    2088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 03:49:35.399743    2088 ssh_runner.go:195] Run: openssl version
	I1028 03:49:35.402009    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I1028 03:49:35.405868    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.407541    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:47 /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.407567    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I1028 03:49:35.409820    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 03:49:35.413289    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 03:49:35.416907    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.418454    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:40 /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.418473    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 03:49:35.420554    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 03:49:35.424200    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I1028 03:49:35.428253    2088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.430011    2088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:47 /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.430048    2088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I1028 03:49:35.432060    2088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
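
The ls/openssl/ln sequence above installs each CA into the guest's OpenSSL trust store: `openssl x509 -hash -noout -in cert.pem` prints the subject hash (b5213941 for minikubeCA here), and OpenSSL looks the certificate up as /etc/ssl/certs/<hash>.0. A sketch of the same operation, shelling out to openssl just as the log does (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject hash.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of ln -fs: replace any existing link.
    	os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
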
	I1028 03:49:35.435823    2088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 03:49:35.437441    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 03:49:35.439374    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 03:49:35.441350    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 03:49:35.443258    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 03:49:35.445311    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 03:49:35.447296    2088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
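
Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours (openssl exits non-zero if so), which is what would trigger regeneration. The same test in pure Go with crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -in path -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
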
	I1028 03:49:35.449296    2088 kubeadm.go:392] StartCluster: {Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:49:35.449371    2088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 03:49:35.455398    2088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 03:49:35.459251    2088 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 03:49:35.459257    2088 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 03:49:35.459286    2088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 03:49:35.462865    2088 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.463226    2088 kubeconfig.go:125] found "functional-940000" server: "https://192.168.105.4:8441"
	I1028 03:49:35.464170    2088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 03:49:35.468255    2088 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
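
Drift detection here is simply `diff -u old new` on the rendered kubeadm.yaml: diff exits 1 when the files differ (the admission-plugin override above), so minikube reconfigures rather than reusing the running control plane. A sketch of that exit-code check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.Output()
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		// Exit status 1 from diff means the files differ: config drift.
    		fmt.Printf("detected kubeadm config drift:\n%s", out)
    		return
    	}
    	if err != nil {
    		panic(err) // exit status 2 means diff itself failed (missing file etc.)
    	}
    	fmt.Println("no drift, existing cluster config can be reused")
    }
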
	I1028 03:49:35.468259    2088 kubeadm.go:1160] stopping kube-system containers ...
	I1028 03:49:35.468326    2088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 03:49:35.475703    2088 docker.go:483] Stopping containers: [39fd8deedf87 eb0e5a6bd50d effd798b9372 a3a799cc5c81 590b9d6083e7 ed17182b545c 15338a6f8b02 5390d2837e3c 808a48de8b00 cdc09f4247ba 40c71e637bf7 928a83498715 4917fb563755 758d8fd91a40 14e0ef549e95 c50434193c7a 92ef6f575cc5 a1616cb5a56b a931c4241e1d a1ce3fe79051 553066c0a54e b32b5b79fbdd 2781068c10c6 a57e0e004a13 86dabac6702b 2d7c7f8252a3 1cc5aafe80d5 d8d1eefe1982]
	I1028 03:49:35.475771    2088 ssh_runner.go:195] Run: docker stop 39fd8deedf87 eb0e5a6bd50d effd798b9372 a3a799cc5c81 590b9d6083e7 ed17182b545c 15338a6f8b02 5390d2837e3c 808a48de8b00 cdc09f4247ba 40c71e637bf7 928a83498715 4917fb563755 758d8fd91a40 14e0ef549e95 c50434193c7a 92ef6f575cc5 a1616cb5a56b a931c4241e1d a1ce3fe79051 553066c0a54e b32b5b79fbdd 2781068c10c6 a57e0e004a13 86dabac6702b 2d7c7f8252a3 1cc5aafe80d5 d8d1eefe1982
	I1028 03:49:35.483305    2088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 03:49:35.591682    2088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 03:49:35.597805    2088 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct 28 10:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 28 10:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 28 10:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct 28 10:48 /etc/kubernetes/scheduler.conf
	
	I1028 03:49:35.597845    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1028 03:49:35.602948    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1028 03:49:35.608033    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1028 03:49:35.612549    2088 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.612580    2088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 03:49:35.616834    2088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1028 03:49:35.620583    2088 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 03:49:35.620606    2088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 03:49:35.624331    2088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 03:49:35.628300    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:35.645696    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.055295    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.179926    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.215793    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:36.238348    2088 api_server.go:52] waiting for apiserver process to appear ...
	I1028 03:49:36.238441    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:36.740900    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:37.240530    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:37.245629    2088 api_server.go:72] duration metric: took 1.007280084s to wait for apiserver process to appear ...
	I1028 03:49:37.245635    2088 api_server.go:88] waiting for apiserver healthz status ...
	I1028 03:49:37.245648    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.563527    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 03:49:39.563536    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 03:49:39.563541    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.605967    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:39.605977    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:39.747753    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:39.750643    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:39.750649    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:40.247749    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:40.252540    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 03:49:40.252552    2088 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 03:49:40.747738    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:40.751993    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1028 03:49:40.756488    2088 api_server.go:141] control plane version: v1.31.2
	I1028 03:49:40.756497    2088 api_server.go:131] duration metric: took 3.510847416s to wait for apiserver health ...
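
The healthz progression above is the normal restart sequence: 403 while the request is still anonymous, 500 with per-check verbose output while poststarthooks (RBAC bootstrap, priority classes, apiservice registration) finish, then 200. With cluster-admin credentials the same verbose view is available via `kubectl get --raw='/healthz?verbose'`. A minimal polling sketch (endpoint and cadence from the log; a real client would pin minikube's CA instead of skipping TLS verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// The apiserver cert is not in the host trust store; skipping
    			// verification keeps this sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 5 * time.Second,
    	}
    	for {
    		resp, err := client.Get("https://192.168.105.4:8441/healthz?verbose")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
    	}
    }
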
	I1028 03:49:40.756502    2088 cni.go:84] Creating CNI manager for ""
	I1028 03:49:40.756513    2088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:49:40.844599    2088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 03:49:40.847538    2088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 03:49:40.852405    2088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
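
The 496-byte 1-k8s.conflist written above is minikube's bridge CNI chain for the 10.244.0.0/16 pod CIDR chosen earlier. A representative conflist, embedded in a Go snippet for consistency with the other sketches here (the field values are typical for this setup, not a byte-for-byte copy of what minikube writes):

    package main

    import "os"

    // A typical bridge CNI chain for the pod CIDR seen in the log; the exact
    // file minikube writes may differ in detail.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
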
	I1028 03:49:40.860531    2088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 03:49:40.864801    2088 system_pods.go:59] 6 kube-system pods found
	I1028 03:49:40.864810    2088 system_pods.go:61] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 03:49:40.864813    2088 system_pods.go:61] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 03:49:40.864816    2088 system_pods.go:61] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 03:49:40.864818    2088 system_pods.go:61] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 03:49:40.864820    2088 system_pods.go:61] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 03:49:40.864822    2088 system_pods.go:61] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 03:49:40.864824    2088 system_pods.go:74] duration metric: took 4.28775ms to wait for pod list to return data ...
	I1028 03:49:40.864827    2088 node_conditions.go:102] verifying NodePressure condition ...
	I1028 03:49:40.866380    2088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 03:49:40.866386    2088 node_conditions.go:123] node cpu capacity is 2
	I1028 03:49:40.866391    2088 node_conditions.go:105] duration metric: took 1.561875ms to run NodePressure ...
	I1028 03:49:40.866398    2088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 03:49:41.091536    2088 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 03:49:41.094366    2088 kubeadm.go:739] kubelet initialised
	I1028 03:49:41.094372    2088 kubeadm.go:740] duration metric: took 2.825041ms waiting for restarted kubelet to initialise ...
	I1028 03:49:41.094377    2088 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:41.097693    2088 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:43.112361    2088 pod_ready.go:103] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"False"
	I1028 03:49:44.613352    2088 pod_ready.go:93] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:44.613378    2088 pod_ready.go:82] duration metric: took 3.515661542s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:44.613395    2088 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:46.628063    2088 pod_ready.go:103] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"False"
	I1028 03:49:49.123390    2088 pod_ready.go:93] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:49.123403    2088 pod_ready.go:82] duration metric: took 4.509982709s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:49.123415    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.636353    2088 pod_ready.go:93] pod "kube-apiserver-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.636379    2088 pod_ready.go:82] duration metric: took 1.51294775s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.636398    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.643808    2088 pod_ready.go:93] pod "kube-proxy-hllfn" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.643820    2088 pod_ready.go:82] duration metric: took 7.414291ms for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.643830    2088 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.650578    2088 pod_ready.go:93] pod "kube-scheduler-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.650591    2088 pod_ready.go:82] duration metric: took 6.753542ms for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.650603    2088 pod_ready.go:39] duration metric: took 9.556186208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
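
Each pod_ready wait above polls a pod's status until its Ready condition reports True. An equivalent standalone check with client-go (a sketch, not minikube's own pod_ready.go helper; the pod name and kubeconfig path are taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-7c65d6cfc9-nmlwl", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
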
	I1028 03:49:50.650623    2088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 03:49:50.664935    2088 ops.go:34] apiserver oom_adj: -16
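
A value of -16 on the legacy oom_adj scale marks kube-apiserver as a process the kernel's OOM killer should avoid, and minikube reads it back as a sanity check. The same probe in Go, mirroring the pgrep pattern from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest kube-apiserver process, as the log's pgrep does.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
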
	I1028 03:49:50.664943    2088 kubeadm.go:597] duration metric: took 15.205628041s to restartPrimaryControlPlane
	I1028 03:49:50.664949    2088 kubeadm.go:394] duration metric: took 15.215602209s to StartCluster
	I1028 03:49:50.664963    2088 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:50.665172    2088 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:49:50.665795    2088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:49:50.666230    2088 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 03:49:50.666245    2088 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 03:49:50.666316    2088 addons.go:69] Setting storage-provisioner=true in profile "functional-940000"
	I1028 03:49:50.666342    2088 addons.go:234] Setting addon storage-provisioner=true in "functional-940000"
	W1028 03:49:50.666348    2088 addons.go:243] addon storage-provisioner should already be in state true
	I1028 03:49:50.666367    2088 host.go:66] Checking if "functional-940000" exists ...
	I1028 03:49:50.666383    2088 addons.go:69] Setting default-storageclass=true in profile "functional-940000"
	I1028 03:49:50.666400    2088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-940000"
	I1028 03:49:50.666466    2088 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:49:50.668304    2088 addons.go:234] Setting addon default-storageclass=true in "functional-940000"
	W1028 03:49:50.668310    2088 addons.go:243] addon default-storageclass should already be in state true
	I1028 03:49:50.668323    2088 host.go:66] Checking if "functional-940000" exists ...
	I1028 03:49:50.670497    2088 out.go:177] * Verifying Kubernetes components...
	I1028 03:49:50.671125    2088 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 03:49:50.674155    2088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 03:49:50.674170    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:50.678398    2088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 03:49:50.682418    2088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 03:49:50.685418    2088 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 03:49:50.685423    2088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 03:49:50.685431    2088 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
	I1028 03:49:50.812154    2088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 03:49:50.819548    2088 node_ready.go:35] waiting up to 6m0s for node "functional-940000" to be "Ready" ...
	I1028 03:49:50.820189    2088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 03:49:50.821178    2088 node_ready.go:49] node "functional-940000" has status "Ready":"True"
	I1028 03:49:50.821185    2088 node_ready.go:38] duration metric: took 1.624083ms for node "functional-940000" to be "Ready" ...
	I1028 03:49:50.821188    2088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:50.823499    2088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.825528    2088 pod_ready.go:93] pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:50.825532    2088 pod_ready.go:82] duration metric: took 2.027583ms for pod "coredns-7c65d6cfc9-nmlwl" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.825535    2088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:50.887957    2088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 03:49:51.119266    2088 pod_ready.go:93] pod "etcd-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.119271    2088 pod_ready.go:82] duration metric: took 293.73225ms for pod "etcd-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.119274    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.162785    2088 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 03:49:51.166710    2088 addons.go:510] duration metric: took 500.474208ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 03:49:51.522145    2088 pod_ready.go:93] pod "kube-apiserver-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.522170    2088 pod_ready.go:82] duration metric: took 402.886584ms for pod "kube-apiserver-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.522186    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.923833    2088 pod_ready.go:93] pod "kube-proxy-hllfn" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:51.923864    2088 pod_ready.go:82] duration metric: took 401.6655ms for pod "kube-proxy-hllfn" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:51.923882    2088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:52.326889    2088 pod_ready.go:93] pod "kube-scheduler-functional-940000" in "kube-system" namespace has status "Ready":"True"
	I1028 03:49:52.326920    2088 pod_ready.go:82] duration metric: took 403.02125ms for pod "kube-scheduler-functional-940000" in "kube-system" namespace to be "Ready" ...
	I1028 03:49:52.326942    2088 pod_ready.go:39] duration metric: took 1.505741542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 03:49:52.326983    2088 api_server.go:52] waiting for apiserver process to appear ...
	I1028 03:49:52.327312    2088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 03:49:52.346533    2088 api_server.go:72] duration metric: took 1.68027475s to wait for apiserver process to appear ...
	I1028 03:49:52.346548    2088 api_server.go:88] waiting for apiserver healthz status ...
	I1028 03:49:52.346566    2088 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1028 03:49:52.353514    2088 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1028 03:49:52.354416    2088 api_server.go:141] control plane version: v1.31.2
	I1028 03:49:52.354425    2088 api_server.go:131] duration metric: took 7.872ms to wait for apiserver health ...
	I1028 03:49:52.354430    2088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 03:49:52.531313    2088 system_pods.go:59] 6 kube-system pods found
	I1028 03:49:52.531354    2088 system_pods.go:61] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:52.531362    2088 system_pods.go:61] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:52.531368    2088 system_pods.go:61] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:52.531374    2088 system_pods.go:61] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:52.531379    2088 system_pods.go:61] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:52.531387    2088 system_pods.go:61] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:52.531398    2088 system_pods.go:74] duration metric: took 176.961291ms to wait for pod list to return data ...
	I1028 03:49:52.531416    2088 default_sa.go:34] waiting for default service account to be created ...
	I1028 03:49:52.725337    2088 default_sa.go:45] found service account: "default"
	I1028 03:49:52.725371    2088 default_sa.go:55] duration metric: took 193.939583ms for default service account to be created ...
	I1028 03:49:52.725387    2088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 03:49:52.930005    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:52.930039    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:52.930049    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:52.930056    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:52.930064    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:52.930070    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:52.930076    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:52.930128    2088 retry.go:31] will retry after 290.195252ms: missing components: kube-controller-manager
	I1028 03:49:53.235613    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:53.235648    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:53.235662    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:53.235668    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:53.235676    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:53.235681    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:53.235687    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:53.235714    2088 retry.go:31] will retry after 383.302622ms: missing components: kube-controller-manager
	I1028 03:49:53.633343    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:53.633373    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:53.633387    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:53.633394    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:53.633399    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:53.633405    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:53.633411    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:53.633437    2088 retry.go:31] will retry after 466.822653ms: missing components: kube-controller-manager
	I1028 03:49:54.115299    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:54.115337    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:54.115351    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:54.115357    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:54.115364    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:54.115370    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:54.115376    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:54.115406    2088 retry.go:31] will retry after 439.505374ms: missing components: kube-controller-manager
	I1028 03:49:54.573283    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:54.573317    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:54.573325    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:54.573330    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:54.573335    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:54.573339    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:54.573343    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:54.573366    2088 retry.go:31] will retry after 518.78481ms: missing components: kube-controller-manager
	I1028 03:49:55.106876    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:55.106907    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:55.106920    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:55.106927    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:55.106933    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:55.106939    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:55.106943    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:55.106971    2088 retry.go:31] will retry after 634.219295ms: missing components: kube-controller-manager
	I1028 03:49:55.751992    2088 system_pods.go:86] 6 kube-system pods found
	I1028 03:49:55.752023    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:55.752031    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:55.752035    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:55.752039    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:55.752051    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:55.752055    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:55.752082    2088 retry.go:31] will retry after 909.805144ms: missing components: kube-controller-manager
	I1028 03:49:56.668065    2088 system_pods.go:86] 7 kube-system pods found
	I1028 03:49:56.668074    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:56.668076    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:56.668078    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:56.668079    2088 system_pods.go:89] "kube-controller-manager-functional-940000" [d2fef415-2950-4f08-8dea-980e6a61a55f] Pending
	I1028 03:49:56.668081    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:56.668082    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:56.668084    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:56.668092    2088 retry.go:31] will retry after 1.150009358s: missing components: kube-controller-manager
	I1028 03:49:57.824769    2088 system_pods.go:86] 7 kube-system pods found
	I1028 03:49:57.824782    2088 system_pods.go:89] "coredns-7c65d6cfc9-nmlwl" [6ee8f59e-db27-408f-9779-967807bd186b] Running
	I1028 03:49:57.824786    2088 system_pods.go:89] "etcd-functional-940000" [a51ce7f1-4fb0-4f06-ad8a-b519d522c4c4] Running
	I1028 03:49:57.824788    2088 system_pods.go:89] "kube-apiserver-functional-940000" [cf2b9b92-9572-4a8c-b728-b686a8f03aca] Running
	I1028 03:49:57.824796    2088 system_pods.go:89] "kube-controller-manager-functional-940000" [d2fef415-2950-4f08-8dea-980e6a61a55f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 03:49:57.824799    2088 system_pods.go:89] "kube-proxy-hllfn" [ed1390bd-aeb1-46b7-9e4e-5d2956b1b205] Running
	I1028 03:49:57.824802    2088 system_pods.go:89] "kube-scheduler-functional-940000" [3f3164e2-924f-4151-93f5-fb27e5b8d48b] Running
	I1028 03:49:57.824805    2088 system_pods.go:89] "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
	I1028 03:49:57.824810    2088 system_pods.go:126] duration metric: took 5.099395291s to wait for k8s-apps to be running ...
	I1028 03:49:57.824816    2088 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 03:49:57.824978    2088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 03:49:57.835540    2088 system_svc.go:56] duration metric: took 10.706541ms WaitForService to wait for kubelet
	I1028 03:49:57.835554    2088 kubeadm.go:582] duration metric: took 7.169281084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 03:49:57.835570    2088 node_conditions.go:102] verifying NodePressure condition ...
	I1028 03:49:57.837807    2088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 03:49:57.837812    2088 node_conditions.go:123] node cpu capacity is 2
	I1028 03:49:57.837818    2088 node_conditions.go:105] duration metric: took 2.245334ms to run NodePressure ...
	I1028 03:49:57.837825    2088 start.go:241] waiting for startup goroutines ...
	I1028 03:49:57.837828    2088 start.go:246] waiting for cluster config update ...
	I1028 03:49:57.837835    2088 start.go:255] writing updated cluster config ...
	I1028 03:49:57.838225    2088 ssh_runner.go:195] Run: rm -f paused
	I1028 03:49:57.873351    2088 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1028 03:49:57.877549    2088 out.go:177] * Done! kubectl is now configured to use "functional-940000" cluster and "default" namespace by default
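
Editorial sketch: the retry.go lines near the top of this log show minikube polling the kube-system pods with a growing, jittered delay (roughly 0.4s up to 1.2s) until kube-controller-manager appears. Below is a minimal Go sketch of that poll-with-backoff pattern, assuming a hypothetical checkFn helper; it is an illustration of the pattern, not minikube's actual retry package.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // checkFn reports nil when every expected component is running.
    type checkFn func() error

    // retryWithBackoff polls check, sleeping a jittered, growing delay
    // between attempts, until check succeeds or the deadline passes.
    func retryWithBackoff(check checkFn, initial time.Duration, deadline time.Time) error {
        delay := initial
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            // Jitter the delay so retries do not synchronize, then grow it;
            // this produces irregular intervals like the ones in the log.
            jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay = delay * 3 / 2
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("missing components: kube-controller-manager")
            }
            return nil
        }, 400*time.Millisecond, time.Now().Add(30*time.Second))
        fmt.Println("done:", err)
    }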
	
	
	==> Docker <==
	Oct 28 10:50:38 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:50:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14dcde26cc4dbdd1eb500bc137f506f80318c6be894a8c15336e20e8f71b1191/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 28 10:50:38 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:50:38Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Oct 28 10:50:38 functional-940000 dockerd[5678]: time="2024-10-28T10:50:38.893692765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:50:38 functional-940000 dockerd[5678]: time="2024-10-28T10:50:38.893742975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:50:38 functional-940000 dockerd[5678]: time="2024-10-28T10:50:38.893755975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:38 functional-940000 dockerd[5678]: time="2024-10-28T10:50:38.893798434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:46 functional-940000 dockerd[5678]: time="2024-10-28T10:50:46.474843644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:50:46 functional-940000 dockerd[5678]: time="2024-10-28T10:50:46.474912937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:50:46 functional-940000 dockerd[5678]: time="2024-10-28T10:50:46.474924563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:46 functional-940000 dockerd[5678]: time="2024-10-28T10:50:46.474965772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:46 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:50:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 28 10:50:47 functional-940000 cri-dockerd[5932]: time="2024-10-28T10:50:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.926132775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.926352238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.926390655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.926448657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 10:50:47 functional-940000 dockerd[5672]: time="2024-10-28T10:50:47.960973614Z" level=info msg="ignoring event" container=c7b140e9f04aa7dc57ac0f21c7d60acbc83307b2341aff6d72fa0015d6e8efd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.961197910Z" level=info msg="shim disconnected" id=c7b140e9f04aa7dc57ac0f21c7d60acbc83307b2341aff6d72fa0015d6e8efd9 namespace=moby
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.961250120Z" level=warning msg="cleaning up after shim disconnected" id=c7b140e9f04aa7dc57ac0f21c7d60acbc83307b2341aff6d72fa0015d6e8efd9 namespace=moby
	Oct 28 10:50:47 functional-940000 dockerd[5678]: time="2024-10-28T10:50:47.961254828Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 28 10:50:49 functional-940000 dockerd[5678]: time="2024-10-28T10:50:49.663246972Z" level=info msg="shim disconnected" id=cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66 namespace=moby
	Oct 28 10:50:49 functional-940000 dockerd[5678]: time="2024-10-28T10:50:49.663286056Z" level=warning msg="cleaning up after shim disconnected" id=cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66 namespace=moby
	Oct 28 10:50:49 functional-940000 dockerd[5678]: time="2024-10-28T10:50:49.663295348Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 28 10:50:49 functional-940000 dockerd[5672]: time="2024-10-28T10:50:49.663428976Z" level=info msg="ignoring event" container=cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 28 10:50:49 functional-940000 dockerd[5678]: time="2024-10-28T10:50:49.668123292Z" level=warning msg="cleanup warnings time=\"2024-10-28T10:50:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c7b140e9f04aa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   cc197c0ada881       busybox-mount
	0e6df5b19b79f       nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb                         15 seconds ago       Running             myfrontend                0                   14dcde26cc4db       sp-pod
	8955cdd9ab4bc       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   d50087330ae4c       hello-node-connect-65d86f57f4-82sgl
	bfc83b3000bb5       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   c2bf76a64fc05       hello-node-64b4f8f9ff-gfcdn
	c04392e50a8ef       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         39 seconds ago       Running             nginx                     0                   2409ecfc1286f       nginx-svc
	e1379d9874bdb       9404aea098d9e                                                                                         57 seconds ago       Running             kube-controller-manager   2                   4df7e349a8eb6       kube-controller-manager-functional-940000
	6f3f41d1b514b       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   68601b2c8889b       coredns-7c65d6cfc9-nmlwl
	f5e76a37ed6f1       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   46c1dc34e5206       storage-provisioner
	02ca52d4decb3       021d242013305                                                                                         About a minute ago   Running             kube-proxy                2                   77b9c769130ad       kube-proxy-hllfn
	6249c5bc32918       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   5fd2b7378b694       etcd-functional-940000
	96355ca0248be       d6b061e73ae45                                                                                         About a minute ago   Running             kube-scheduler            2                   1687a96d0e62e       kube-scheduler-functional-940000
	3d5f6e4b3f879       f9c26480f1e72                                                                                         About a minute ago   Running             kube-apiserver            0                   35eae2c916071       kube-apiserver-functional-940000
	39fd8deedf87f       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   a3a799cc5c81b       coredns-7c65d6cfc9-nmlwl
	eb0e5a6bd50d0       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   590b9d6083e73       storage-provisioner
	effd798b9372f       021d242013305                                                                                         About a minute ago   Exited              kube-proxy                1                   ed17182b545cf       kube-proxy-hllfn
	15338a6f8b02a       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   4917fb5637554       etcd-functional-940000
	808a48de8b008       9404aea098d9e                                                                                         About a minute ago   Exited              kube-controller-manager   1                   758d8fd91a405       kube-controller-manager-functional-940000
	cdc09f4247ba8       d6b061e73ae45                                                                                         About a minute ago   Exited              kube-scheduler            1                   928a834987153       kube-scheduler-functional-940000
	
	
	==> coredns [39fd8deedf87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50744 - 56984 "HINFO IN 9008655988936561773.4025288987151893064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010423885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6f3f41d1b514] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50050 - 62484 "HINFO IN 2111370520541573694.6997983717341631236. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004942411s
	[INFO] 10.244.0.1:11840 - 58551 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000094419s
	[INFO] 10.244.0.1:18117 - 56402 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000091503s
	[INFO] 10.244.0.1:36867 - 25537 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001437245s
	[INFO] 10.244.0.1:15633 - 33980 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000026043s
	[INFO] 10.244.0.1:29652 - 23891 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000060585s
	[INFO] 10.244.0.1:51486 - 63486 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000076169s
	
	
	==> describe nodes <==
	Name:               functional-940000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-940000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=functional-940000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T03_48_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:48:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-940000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 10:50:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 10:50:41 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 10:50:41 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 10:50:41 +0000   Mon, 28 Oct 2024 10:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 10:50:41 +0000   Mon, 28 Oct 2024 10:48:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-940000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 459409642a424b32848f38ff16abbdd1
	  System UUID:                459409642a424b32848f38ff16abbdd1
	  Boot ID:                    10f2f602-1e59-480d-9138-a9cae6ead9ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-gfcdn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     hello-node-connect-65d86f57f4-82sgl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 coredns-7c65d6cfc9-nmlwl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m25s
	  kube-system                 etcd-functional-940000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-940000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-940000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-hllfn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-functional-940000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 72s                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m31s                  kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m31s                  kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s                  kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m27s                  kubelet          Node functional-940000 status is now: NodeReady
	  Normal  RegisteredNode           2m26s                  node-controller  Node functional-940000 event: Registered Node functional-940000 in Controller
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 119s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                   node-controller  Node functional-940000 event: Registered Node functional-940000 in Controller
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node functional-940000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node functional-940000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node functional-940000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                    node-controller  Node functional-940000 event: Registered Node functional-940000 in Controller
	
	
	==> dmesg <==
	[  +4.410500] kauditd_printk_skb: 199 callbacks suppressed
	[Oct28 10:49] systemd-fstab-generator[4755]: Ignoring "noauto" option for root device
	[  +0.059571] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.721723] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[  +0.053055] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.105746] systemd-fstab-generator[5216]: Ignoring "noauto" option for root device
	[  +0.096701] systemd-fstab-generator[5228]: Ignoring "noauto" option for root device
	[  +0.110427] systemd-fstab-generator[5242]: Ignoring "noauto" option for root device
	[  +5.121572] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.337580] systemd-fstab-generator[5881]: Ignoring "noauto" option for root device
	[  +0.091112] systemd-fstab-generator[5893]: Ignoring "noauto" option for root device
	[  +0.086138] systemd-fstab-generator[5905]: Ignoring "noauto" option for root device
	[  +0.091194] systemd-fstab-generator[5920]: Ignoring "noauto" option for root device
	[  +0.227155] systemd-fstab-generator[6086]: Ignoring "noauto" option for root device
	[  +0.878933] systemd-fstab-generator[6209]: Ignoring "noauto" option for root device
	[  +4.425561] kauditd_printk_skb: 189 callbacks suppressed
	[ +10.195576] systemd-fstab-generator[7111]: Ignoring "noauto" option for root device
	[  +0.052824] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.850589] kauditd_printk_skb: 12 callbacks suppressed
	[Oct28 10:50] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.373025] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.035441] kauditd_printk_skb: 27 callbacks suppressed
	[ +13.673608] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.427163] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.152015] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [15338a6f8b02] <==
	{"level":"info","ts":"2024-10-28T10:48:56.844757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T10:48:56.844850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-28T10:48:56.844886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.844969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:48:56.849491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-940000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T10:48:56.849580Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:48:56.850299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T10:48:56.850497Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T10:48:56.850314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:48:56.851862Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:48:56.852258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:48:56.854427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-28T10:48:56.854471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T10:49:22.450408Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T10:49:22.450434Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-940000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-28T10:49:22.450490Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.450531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.457375Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T10:49:22.457397Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T10:49:22.457418Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-28T10:49:22.459200Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:22.459235Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:22.459239Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-940000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [6249c5bc3291] <==
	{"level":"info","ts":"2024-10-28T10:49:37.317907Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-28T10:49:37.317968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T10:49:37.317996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T10:49:37.319220Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:37.319874Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T10:49:37.322901Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:37.323207Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-28T10:49:37.323484Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T10:49:37.323511Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T10:49:39.088430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-28T10:49:39.088721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.088930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-28T10:49:39.094341Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-940000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T10:49:39.094631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T10:49:39.094775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T10:49:39.094647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:49:39.094676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T10:49:39.096788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:39.096788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T10:49:39.098598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-28T10:49:39.099391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:50:53 up 2 min,  0 users,  load average: 0.68, 0.40, 0.16
	Linux functional-940000 5.10.207 #1 SMP PREEMPT Tue Oct 15 16:10:02 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d5f6e4b3f87] <==
	I1028 10:49:39.697150       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 10:49:39.697501       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 10:49:39.697554       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 10:49:39.697593       1 aggregator.go:171] initial CRD sync complete...
	I1028 10:49:39.697610       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 10:49:39.697641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 10:49:39.697660       1 cache.go:39] Caches are synced for autoregister controller
	I1028 10:49:39.698763       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 10:49:39.724691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 10:49:40.598695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1028 10:49:40.701877       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1028 10:49:40.702485       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 10:49:40.707799       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 10:49:40.952524       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 10:49:40.956457       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 10:49:40.966999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 10:49:40.973945       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 10:49:40.975909       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 10:50:00.265929       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.127.235"}
	I1028 10:50:06.273746       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1028 10:50:06.318029       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.76.161"}
	I1028 10:50:10.193248       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.49.5"}
	I1028 10:50:20.664063       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.19.20"}
	E1028 10:50:36.156706       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49727: use of closed network connection
	E1028 10:50:44.552262       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49732: use of closed network connection
	
	
	==> kube-controller-manager [808a48de8b00] <==
	I1028 10:49:00.785896       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-940000\" does not exist"
	I1028 10:49:00.787237       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 10:49:00.801590       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 10:49:00.807285       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 10:49:00.808742       1 shared_informer.go:320] Caches are synced for namespace
	I1028 10:49:00.809617       1 shared_informer.go:320] Caches are synced for service account
	I1028 10:49:00.811984       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1028 10:49:00.832983       1 shared_informer.go:320] Caches are synced for TTL
	I1028 10:49:00.835219       1 shared_informer.go:320] Caches are synced for node
	I1028 10:49:00.835353       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1028 10:49:00.835368       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1028 10:49:00.835371       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1028 10:49:00.835373       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1028 10:49:00.835430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-940000"
	I1028 10:49:00.837169       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:00.884459       1 shared_informer.go:320] Caches are synced for GC
	I1028 10:49:00.893880       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:00.933122       1 shared_informer.go:320] Caches are synced for taint
	I1028 10:49:00.933186       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 10:49:00.933239       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-940000"
	I1028 10:49:00.933275       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 10:49:00.983020       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 10:49:01.357196       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:01.435138       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:01.435194       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e1379d9874bd] <==
	I1028 10:49:58.753558       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 10:49:58.777215       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:58.802811       1 shared_informer.go:320] Caches are synced for HPA
	I1028 10:49:58.805032       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 10:49:59.216641       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:59.304171       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 10:49:59.304192       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 10:50:06.288545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="12.698706ms"
	I1028 10:50:06.294614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.921946ms"
	I1028 10:50:06.300399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.734942ms"
	I1028 10:50:06.300460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="37.793µs"
	I1028 10:50:12.901691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="45.126µs"
	I1028 10:50:13.902799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.834µs"
	I1028 10:50:14.909577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.709µs"
	I1028 10:50:20.630266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="9.510158ms"
	I1028 10:50:20.645435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="15.136177ms"
	I1028 10:50:20.645466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="11.791µs"
	I1028 10:50:22.052482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="33.418µs"
	I1028 10:50:23.084449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="48.252µs"
	I1028 10:50:24.085795       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.875µs"
	I1028 10:50:30.181806       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="44.502µs"
	I1028 10:50:36.306526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.376µs"
	I1028 10:50:41.342238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-940000"
	I1028 10:50:42.305184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="53.626µs"
	I1028 10:50:49.289654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="48.96µs"
	
	
	==> kube-proxy [02ca52d4decb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:49:40.856450       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:49:40.904765       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1028 10:49:40.904847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:49:40.926998       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:49:40.927019       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:49:40.927034       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:49:40.927845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:49:40.927983       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:49:40.927988       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:49:40.928902       1 config.go:199] "Starting service config controller"
	I1028 10:49:40.928948       1 config.go:328] "Starting node config controller"
	I1028 10:49:40.928974       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:49:40.929012       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:49:40.929032       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:49:40.931670       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:49:40.931745       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:49:41.029350       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:49:41.029343       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [effd798b9372] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:48:58.720778       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:48:58.731500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1028 10:48:58.731536       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:48:58.750529       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:48:58.750552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:48:58.750567       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:48:58.751283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:48:58.751404       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:48:58.751412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:48:58.751913       1 config.go:199] "Starting service config controller"
	I1028 10:48:58.751927       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:48:58.751939       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:48:58.751995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:48:58.752189       1 config.go:328] "Starting node config controller"
	I1028 10:48:58.752196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:48:58.853014       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:48:58.853035       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:48:58.853048       1 shared_informer.go:320] Caches are synced for endpoint slice config
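
Editorial sketch: both kube-proxy restarts hit the same startup path: an attempt to clean up nftables state fails because this guest kernel cannot process "add table ip kube-proxy" (Operation not supported), after which kube-proxy detects no usable nftables/IPv6 support and proceeds with the iptables proxier. A hedged Go sketch that reproduces the shape of that capability probe on a node follows; the helper is hypothetical, not kube-proxy's real code, and it assumes only that the nft binary is installed.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nftablesSupported tries the same table-creation command the log shows;
    // on a kernel without nf_tables support this fails with
    // "Operation not supported", mirroring the proxier.go error above.
    func nftablesSupported() error {
        cmd := exec.Command("nft", "-f", "/dev/stdin")
        cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("could not run nftables command: %s: %w",
                strings.TrimSpace(string(out)), err)
        }
        return nil
    }

    func main() {
        if err := nftablesSupported(); err != nil {
            fmt.Println("nftables unavailable, falling back to iptables proxier:", err)
            return
        }
        fmt.Println("nftables available")
    }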
	
	
	==> kube-scheduler [96355ca0248b] <==
	I1028 10:49:37.544473       1 serving.go:386] Generated self-signed cert in-memory
	W1028 10:49:39.616327       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 10:49:39.616428       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 10:49:39.616471       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 10:49:39.616490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 10:49:39.631831       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 10:49:39.631934       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:49:39.632995       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 10:49:39.634915       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 10:49:39.634977       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:49:39.635004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 10:49:39.735860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cdc09f4247ba] <==
	I1028 10:48:55.284205       1 serving.go:386] Generated self-signed cert in-memory
	W1028 10:48:57.374678       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 10:48:57.374718       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 10:48:57.374739       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 10:48:57.374746       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 10:48:57.409340       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 10:48:57.409356       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:48:57.410259       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 10:48:57.415476       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:48:57.415702       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 10:48:57.415753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 10:48:57.516619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 10:49:22.440059       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1028 10:49:22.440082       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1028 10:49:22.440150       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 10:50:36 functional-940000 kubelet[6216]: I1028 10:50:36.517082    6216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/bac02e96-1a17-40e2-bc1d-240d9e7ea322-pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7\") pod \"bac02e96-1a17-40e2-bc1d-240d9e7ea322\" (UID: \"bac02e96-1a17-40e2-bc1d-240d9e7ea322\") "
	Oct 28 10:50:36 functional-940000 kubelet[6216]: I1028 10:50:36.517117    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac02e96-1a17-40e2-bc1d-240d9e7ea322-pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7" (OuterVolumeSpecName: "mypd") pod "bac02e96-1a17-40e2-bc1d-240d9e7ea322" (UID: "bac02e96-1a17-40e2-bc1d-240d9e7ea322"). InnerVolumeSpecName "pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 28 10:50:36 functional-940000 kubelet[6216]: I1028 10:50:36.520656    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac02e96-1a17-40e2-bc1d-240d9e7ea322-kube-api-access-698l8" (OuterVolumeSpecName: "kube-api-access-698l8") pod "bac02e96-1a17-40e2-bc1d-240d9e7ea322" (UID: "bac02e96-1a17-40e2-bc1d-240d9e7ea322"). InnerVolumeSpecName "kube-api-access-698l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 28 10:50:36 functional-940000 kubelet[6216]: I1028 10:50:36.618081    6216 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-698l8\" (UniqueName: \"kubernetes.io/projected/bac02e96-1a17-40e2-bc1d-240d9e7ea322-kube-api-access-698l8\") on node \"functional-940000\" DevicePath \"\""
	Oct 28 10:50:36 functional-940000 kubelet[6216]: I1028 10:50:36.618103    6216 reconciler_common.go:288] "Volume detached for volume \"pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7\" (UniqueName: \"kubernetes.io/host-path/bac02e96-1a17-40e2-bc1d-240d9e7ea322-pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7\") on node \"functional-940000\" DevicePath \"\""
	Oct 28 10:50:37 functional-940000 kubelet[6216]: E1028 10:50:37.441479    6216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bac02e96-1a17-40e2-bc1d-240d9e7ea322" containerName="myfrontend"
	Oct 28 10:50:37 functional-940000 kubelet[6216]: I1028 10:50:37.441685    6216 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac02e96-1a17-40e2-bc1d-240d9e7ea322" containerName="myfrontend"
	Oct 28 10:50:37 functional-940000 kubelet[6216]: I1028 10:50:37.632806    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7\" (UniqueName: \"kubernetes.io/host-path/7d24d8b6-8684-4ef8-be5f-99aea7bedfd3-pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7\") pod \"sp-pod\" (UID: \"7d24d8b6-8684-4ef8-be5f-99aea7bedfd3\") " pod="default/sp-pod"
	Oct 28 10:50:37 functional-940000 kubelet[6216]: I1028 10:50:37.632847    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmvp\" (UniqueName: \"kubernetes.io/projected/7d24d8b6-8684-4ef8-be5f-99aea7bedfd3-kube-api-access-rfmvp\") pod \"sp-pod\" (UID: \"7d24d8b6-8684-4ef8-be5f-99aea7bedfd3\") " pod="default/sp-pod"
	Oct 28 10:50:38 functional-940000 kubelet[6216]: I1028 10:50:38.283382    6216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac02e96-1a17-40e2-bc1d-240d9e7ea322" path="/var/lib/kubelet/pods/bac02e96-1a17-40e2-bc1d-240d9e7ea322/volumes"
	Oct 28 10:50:39 functional-940000 kubelet[6216]: I1028 10:50:39.395153    6216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.725551657 podStartE2EDuration="2.39513097s" podCreationTimestamp="2024-10-28 10:50:37 +0000 UTC" firstStartedPulling="2024-10-28 10:50:38.184794935 +0000 UTC m=+61.958990536" lastFinishedPulling="2024-10-28 10:50:38.854374207 +0000 UTC m=+62.628569849" observedRunningTime="2024-10-28 10:50:39.394683335 +0000 UTC m=+63.168879019" watchObservedRunningTime="2024-10-28 10:50:39.39513097 +0000 UTC m=+63.169326655"
	Oct 28 10:50:42 functional-940000 kubelet[6216]: I1028 10:50:42.280483    6216 scope.go:117] "RemoveContainer" containerID="bfc83b3000bb505d579e776f21ea554ff3c45f5fc0afd2475361b12bca465a1e"
	Oct 28 10:50:42 functional-940000 kubelet[6216]: E1028 10:50:42.280891    6216 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-gfcdn_default(4ef783a6-1e1d-4c4d-a4b9-e555cff18ccd)\"" pod="default/hello-node-64b4f8f9ff-gfcdn" podUID="4ef783a6-1e1d-4c4d-a4b9-e555cff18ccd"
	Oct 28 10:50:46 functional-940000 kubelet[6216]: I1028 10:50:46.323159    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-test-volume\") pod \"busybox-mount\" (UID: \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\") " pod="default/busybox-mount"
	Oct 28 10:50:46 functional-940000 kubelet[6216]: I1028 10:50:46.323190    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntgg\" (UniqueName: \"kubernetes.io/projected/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-kube-api-access-bntgg\") pod \"busybox-mount\" (UID: \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\") " pod="default/busybox-mount"
	Oct 28 10:50:46 functional-940000 kubelet[6216]: I1028 10:50:46.516562    6216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66"
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.280828    6216 scope.go:117] "RemoveContainer" containerID="8955cdd9ab4bc4e217f8ab0788d0364d08253fe0868e77dfebe3fcc105922fb0"
	Oct 28 10:50:49 functional-940000 kubelet[6216]: E1028 10:50:49.281036    6216 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-82sgl_default(94f2800c-2794-4739-b7fc-7a902692f7fc)\"" pod="default/hello-node-connect-65d86f57f4-82sgl" podUID="94f2800c-2794-4739-b7fc-7a902692f7fc"
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.854104    6216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntgg\" (UniqueName: \"kubernetes.io/projected/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-kube-api-access-bntgg\") pod \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\" (UID: \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\") "
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.854126    6216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-test-volume\") pod \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\" (UID: \"c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd\") "
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.854171    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-test-volume" (OuterVolumeSpecName: "test-volume") pod "c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd" (UID: "c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.857467    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-kube-api-access-bntgg" (OuterVolumeSpecName: "kube-api-access-bntgg") pod "c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd" (UID: "c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd"). InnerVolumeSpecName "kube-api-access-bntgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.954608    6216 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bntgg\" (UniqueName: \"kubernetes.io/projected/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-kube-api-access-bntgg\") on node \"functional-940000\" DevicePath \"\""
	Oct 28 10:50:49 functional-940000 kubelet[6216]: I1028 10:50:49.954633    6216 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd-test-volume\") on node \"functional-940000\" DevicePath \"\""
	Oct 28 10:50:50 functional-940000 kubelet[6216]: I1028 10:50:50.588822    6216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc197c0ada8812601343b1c30bccc30eb6178a308252deaca902b80ae1b49d66"
	
	
	==> storage-provisioner [eb0e5a6bd50d] <==
	I1028 10:48:58.697853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:48:58.704167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:48:58.704285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:48:58.708270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:48:58.708500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d80c73e-ab9d-4477-83ba-bc4a70734202", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3 became leader
	I1028 10:48:58.708514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3!
	I1028 10:48:58.809168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-940000_d95c716e-23a1-4b89-aa56-48e1108aafa3!
	
	
	==> storage-provisioner [f5e76a37ed6f] <==
	I1028 10:49:40.807808       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:49:40.816549       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:49:40.816566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:49:58.308932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:49:58.309060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d80c73e-ab9d-4477-83ba-bc4a70734202", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7 became leader
	I1028 10:49:58.309074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7!
	I1028 10:49:58.409720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-940000_c31cbe99-8e75-4a10-813c-6c40cc60aaa7!
	I1028 10:50:23.963869       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1028 10:50:23.963896       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    886bd18c-8e70-42ed-95f1-023b31083a2c 342 0 2024-10-28 10:48:27 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-28 10:48:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  4eac864c-3a3b-449d-a0ba-1fc04aa301a7 755 0 2024-10-28 10:50:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-28 10:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-28 10:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1028 10:50:23.964526       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7" provisioned
	I1028 10:50:23.964539       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1028 10:50:23.964543       1 volume_store.go:212] Trying to save persistentvolume "pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7"
	I1028 10:50:23.965239       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4eac864c-3a3b-449d-a0ba-1fc04aa301a7", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1028 10:50:23.969769       1 volume_store.go:219] persistentvolume "pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7" saved
	I1028 10:50:23.970191       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4eac864c-3a3b-449d-a0ba-1fc04aa301a7", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4eac864c-3a3b-449d-a0ba-1fc04aa301a7
	

-- /stdout --
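For reference, the claim the provisioner log above reports serving can be reconstructed as a short client-go program. This is a hypothetical sketch, not part of the test suite: the name, namespace, storage class, access mode, and 500Mi request are taken from the logged object, and it assumes k8s.io/api v0.29+, where the PVC spec field is VolumeResourceRequirements.

	// Sketch: recreate the "default/myclaim" PVC recorded in the provisioner
	// log above (hypothetical, not minikube test code).
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		class := "standard" // default class served by k8s.io/minikube-hostpath
		pvc := &corev1.PersistentVolumeClaim{
			ObjectMeta: metav1.ObjectMeta{Name: "myclaim", Namespace: "default"},
			Spec: corev1.PersistentVolumeClaimSpec{
				AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				StorageClassName: &class,
				Resources: corev1.VolumeResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("500Mi")},
				},
			},
		}
		if _, err := cs.CoreV1().PersistentVolumeClaims("default").Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

Per the last provisioner line above, the hostpath provisioner backs such a claim with a directory under /tmp/hostpath-provisioner/default/myclaim inside the VM.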
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-940000 -n functional-940000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-940000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-940000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-940000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-940000/192.168.105.4
	Start Time:       Mon, 28 Oct 2024 03:50:46 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://c7b140e9f04aa7dc57ac0f21c7d60acbc83307b2341aff6d72fa0015d6e8efd9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 28 Oct 2024 03:50:47 -0700
	      Finished:     Mon, 28 Oct 2024 03:50:47 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bntgg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bntgg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-940000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.32s)
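helpers_test.go:261 above gathers "non-running" pods with a kubectl field selector, which is why busybox-mount (phase Succeeded) is flagged even though it exited cleanly. The same query, as a minimal client-go sketch (a hypothetical helper, not part of the suite):

	// Sketch: list pods whose phase is not Running in every namespace,
	// mirroring `kubectl get po -A --field-selector=status.phase!=Running`.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}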

TestMultiControlPlane/serial/StartCluster (725.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-921000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1028 03:51:26.782150    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:53:42.897888    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:54:10.624919    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.271908    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.279614    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.293038    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.314439    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.357908    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.441369    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.604851    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:06.928343    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:07.571539    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:08.855276    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:11.418623    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:16.541305    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:26.785095    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:55:47.267944    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:56:28.231144    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:57:50.155312    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:58:42.898895    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:00:06.274601    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:00:33.999495    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-921000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.305923875s)

-- stdout --
	* [ha-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-921000" primary control-plane node in "ha-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
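The stderr trace below shows libmachine polling /var/db/dhcpd_leases once per "Attempt", looking for the MAC address it passed to qemu-system-aarch64, until the vmnet DHCP server hands the VM a lease (or, as in this failure, until the 6-minute createHost timeout fires). A minimal standalone sketch of that lookup follows; it is not minikube's actual implementation, and the MAC is copied from the second QEMU invocation below. Note that macOS writes hw_address octets without leading zeros (0e:b8:24:f2:08:2c is stored as e:b8:24:f2:8:2c), so the comparison has to normalize:

	// Sketch (Go 1.20+): find the DHCP lease for a VM MAC in macOS's
	// /var/db/dhcpd_leases, as the libmachine log below does repeatedly.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// normalize drops leading zeros from each octet, matching how macOS
	// writes hw_address entries (0e:b8:24:f2:08:2c -> e:b8:24:f2:8:2c).
	func normalize(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if t := strings.TrimLeft(p, "0"); t != "" {
				parts[i] = t
			} else {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	func main() {
		want := normalize("f2:0c:76:dc:d9:10") // MAC from the -device line below
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		ip := ""
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // ip_address precedes hw_address within an entry
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// entries look like "hw_address=1,e:b8:24:f2:8:2c"
				if _, mac, found := strings.Cut(v, ","); found && normalize(mac) == want {
					fmt.Println("lease found:", ip)
					return
				}
			}
		}
		fmt.Println("no lease yet for", want)
	}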
** stderr ** 
	I1028 03:51:03.092095    2504 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:51:03.092264    2504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:51:03.092268    2504 out.go:358] Setting ErrFile to fd 2...
	I1028 03:51:03.092271    2504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:51:03.092402    2504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:51:03.093693    2504 out.go:352] Setting JSON to false
	I1028 03:51:03.114583    2504 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1234,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:51:03.114681    2504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:51:03.121268    2504 out.go:177] * [ha-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:51:03.125241    2504 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 03:51:03.125247    2504 notify.go:220] Checking for updates...
	I1028 03:51:03.132113    2504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:51:03.135209    2504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:51:03.138243    2504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:51:03.141122    2504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 03:51:03.144233    2504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 03:51:03.147440    2504 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:51:03.150151    2504 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 03:51:03.157265    2504 start.go:297] selected driver: qemu2
	I1028 03:51:03.157273    2504 start.go:901] validating driver "qemu2" against <nil>
	I1028 03:51:03.157280    2504 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 03:51:03.160059    2504 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 03:51:03.164180    2504 out.go:177] * Automatically selected the socket_vmnet network
	I1028 03:51:03.168250    2504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 03:51:03.168272    2504 cni.go:84] Creating CNI manager for ""
	I1028 03:51:03.168297    2504 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 03:51:03.168301    2504 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 03:51:03.168331    2504 start.go:340] cluster config:
	{Name:ha-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:51:03.173237    2504 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:51:03.179284    2504 out.go:177] * Starting "ha-921000" primary control-plane node in "ha-921000" cluster
	I1028 03:51:03.183199    2504 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:51:03.183222    2504 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 03:51:03.183231    2504 cache.go:56] Caching tarball of preloaded images
	I1028 03:51:03.183295    2504 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 03:51:03.183305    2504 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 03:51:03.183480    2504 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/ha-921000/config.json ...
	I1028 03:51:03.183490    2504 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/ha-921000/config.json: {Name:mk1d50744db4be72017222b3da12fd32a8ad6ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:51:03.183781    2504 start.go:360] acquireMachinesLock for ha-921000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 03:51:03.183830    2504 start.go:364] duration metric: took 44.709µs to acquireMachinesLock for "ha-921000"
	I1028 03:51:03.183841    2504 start.go:93] Provisioning new machine with config: &{Name:ha-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 03:51:03.183872    2504 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 03:51:03.188206    2504 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 03:51:03.210712    2504 start.go:159] libmachine.API.Create for "ha-921000" (driver="qemu2")
	I1028 03:51:03.210743    2504 client.go:168] LocalClient.Create starting
	I1028 03:51:03.210831    2504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 03:51:03.210870    2504 main.go:141] libmachine: Decoding PEM data...
	I1028 03:51:03.210882    2504 main.go:141] libmachine: Parsing certificate...
	I1028 03:51:03.210927    2504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 03:51:03.210956    2504 main.go:141] libmachine: Decoding PEM data...
	I1028 03:51:03.210964    2504 main.go:141] libmachine: Parsing certificate...
	I1028 03:51:03.211404    2504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 03:51:03.425572    2504 main.go:141] libmachine: Creating SSH key...
	I1028 03:51:03.456173    2504 main.go:141] libmachine: Creating Disk image...
	I1028 03:51:03.456178    2504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 03:51:03.456367    2504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:51:03.468259    2504 main.go:141] libmachine: STDOUT: 
	I1028 03:51:03.468289    2504 main.go:141] libmachine: STDERR: 
	I1028 03:51:03.468344    2504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2 +20000M
	I1028 03:51:03.476791    2504 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 03:51:03.476805    2504 main.go:141] libmachine: STDERR: 
	I1028 03:51:03.476825    2504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:51:03.476832    2504 main.go:141] libmachine: Starting QEMU VM...
	I1028 03:51:03.476842    2504 qemu.go:418] Using hvf for hardware acceleration
	I1028 03:51:03.476875    2504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:b8:24:f2:08:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:51:03.516159    2504 main.go:141] libmachine: STDOUT: 
	I1028 03:51:03.516187    2504 main.go:141] libmachine: STDERR: 
	I1028 03:51:03.516191    2504 main.go:141] libmachine: Attempt 0
	I1028 03:51:03.516224    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:03.516333    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:03.516350    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:03.516358    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:03.516364    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:03.516371    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:05.518581    2504 main.go:141] libmachine: Attempt 1
	I1028 03:51:05.518670    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:05.519085    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:05.519152    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:05.519191    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:05.519222    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:05.519252    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:07.521481    2504 main.go:141] libmachine: Attempt 2
	I1028 03:51:07.521625    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:07.521946    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:07.522001    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:07.522033    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:07.522064    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:07.522094    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:09.523610    2504 main.go:141] libmachine: Attempt 3
	I1028 03:51:09.523637    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:09.523716    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:09.523729    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:09.523735    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:09.523740    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:09.523746    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:11.525839    2504 main.go:141] libmachine: Attempt 4
	I1028 03:51:11.525871    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:11.525956    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:11.525981    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:11.525988    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:11.525993    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:11.526001    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:13.528084    2504 main.go:141] libmachine: Attempt 5
	I1028 03:51:13.528101    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:13.528163    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:13.528171    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:13.528181    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:13.528188    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:13.528196    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:15.530284    2504 main.go:141] libmachine: Attempt 6
	I1028 03:51:15.530328    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:15.530400    2504 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1028 03:51:15.530410    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:51:15.530415    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:51:15.530420    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:51:15.530425    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:51:17.532533    2504 main.go:141] libmachine: Attempt 7
	I1028 03:51:17.532587    2504 main.go:141] libmachine: Searching for 0e:b8:24:f2:08:2c in /var/db/dhcpd_leases ...
	I1028 03:51:17.532752    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:51:17.532765    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:51:17.532769    2504 main.go:141] libmachine: Found match: 0e:b8:24:f2:08:2c
	I1028 03:51:17.532783    2504 main.go:141] libmachine: IP: 192.168.105.5
	I1028 03:51:17.532788    2504 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1028 03:57:03.214303    2504 start.go:128] duration metric: took 6m0.029111958s to createHost
	I1028 03:57:03.214375    2504 start.go:83] releasing machines lock for "ha-921000", held for 6m0.029261375s
	W1028 03:57:03.214424    2504 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1028 03:57:03.223191    2504 out.go:177] * Deleting "ha-921000" in qemu2 ...
	W1028 03:57:03.267914    2504 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1028 03:57:03.267950    2504 start.go:729] Will try again in 5 seconds ...
	I1028 03:57:08.270139    2504 start.go:360] acquireMachinesLock for ha-921000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 03:57:08.270709    2504 start.go:364] duration metric: took 472.084µs to acquireMachinesLock for "ha-921000"
	I1028 03:57:08.270880    2504 start.go:93] Provisioning new machine with config: &{Name:ha-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 03:57:08.271152    2504 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 03:57:08.275819    2504 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 03:57:08.325401    2504 start.go:159] libmachine.API.Create for "ha-921000" (driver="qemu2")
	I1028 03:57:08.325464    2504 client.go:168] LocalClient.Create starting
	I1028 03:57:08.325627    2504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 03:57:08.325699    2504 main.go:141] libmachine: Decoding PEM data...
	I1028 03:57:08.325721    2504 main.go:141] libmachine: Parsing certificate...
	I1028 03:57:08.325796    2504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 03:57:08.325854    2504 main.go:141] libmachine: Decoding PEM data...
	I1028 03:57:08.325870    2504 main.go:141] libmachine: Parsing certificate...
	I1028 03:57:08.326604    2504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 03:57:08.499908    2504 main.go:141] libmachine: Creating SSH key...
	I1028 03:57:08.597111    2504 main.go:141] libmachine: Creating Disk image...
	I1028 03:57:08.597117    2504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 03:57:08.597323    2504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:57:08.607162    2504 main.go:141] libmachine: STDOUT: 
	I1028 03:57:08.607182    2504 main.go:141] libmachine: STDERR: 
	I1028 03:57:08.607242    2504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2 +20000M
	I1028 03:57:08.615738    2504 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 03:57:08.615751    2504 main.go:141] libmachine: STDERR: 
	I1028 03:57:08.615764    2504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:57:08.615768    2504 main.go:141] libmachine: Starting QEMU VM...
	I1028 03:57:08.615776    2504 qemu.go:418] Using hvf for hardware acceleration
	I1028 03:57:08.615835    2504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:0c:76:dc:d9:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 03:57:08.652336    2504 main.go:141] libmachine: STDOUT: 
	I1028 03:57:08.652356    2504 main.go:141] libmachine: STDERR: 
	I1028 03:57:08.652360    2504 main.go:141] libmachine: Attempt 0
	I1028 03:57:08.652384    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:08.652488    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:08.652500    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:08.652509    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:08.652515    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:08.652521    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:08.652528    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:10.654709    2504 main.go:141] libmachine: Attempt 1
	I1028 03:57:10.654831    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:10.655265    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:10.655322    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:10.655356    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:10.655388    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:10.655418    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:10.655448    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:12.657667    2504 main.go:141] libmachine: Attempt 2
	I1028 03:57:12.658026    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:12.658488    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:12.658553    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:12.658592    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:12.658625    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:12.658659    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:12.658702    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:14.659341    2504 main.go:141] libmachine: Attempt 3
	I1028 03:57:14.659378    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:14.659502    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:14.659515    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:14.659521    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:14.659526    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:14.659532    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:14.659537    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:16.661587    2504 main.go:141] libmachine: Attempt 4
	I1028 03:57:16.661603    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:16.661658    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:16.661667    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:16.661672    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:16.661678    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:16.661683    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:16.661687    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:18.663733    2504 main.go:141] libmachine: Attempt 5
	I1028 03:57:18.663763    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:18.663852    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:18.663864    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:18.663869    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:18.663881    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:18.663890    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:18.663896    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:20.665970    2504 main.go:141] libmachine: Attempt 6
	I1028 03:57:20.665989    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:20.666088    2504 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1028 03:57:20.666098    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:0e:b8:24:f2:08:2c ID:1,e:b8:24:f2:8:2c Lease:0x671f7ab3}
	I1028 03:57:20.666102    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ca:73:95:55:52:92 ID:1,ca:73:95:55:52:92 Lease:0x671f79f3}
	I1028 03:57:20.666108    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1e:2d:98:94:f0:35 ID:1,1e:2d:98:94:f0:35 Lease:0x671f6ba1}
	I1028 03:57:20.666116    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:ae:50:16:39:de ID:1,ae:ae:50:16:39:de Lease:0x671f6b78}
	I1028 03:57:20.666121    2504 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671f75e6}
	I1028 03:57:22.668213    2504 main.go:141] libmachine: Attempt 7
	I1028 03:57:22.668247    2504 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 03:57:22.668336    2504 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1028 03:57:22.668349    2504 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:f2:0c:76:dc:d9:10 ID:1,f2:c:76:dc:d9:10 Lease:0x671f7c21}
	I1028 03:57:22.668354    2504 main.go:141] libmachine: Found match: f2:0c:76:dc:d9:10
	I1028 03:57:22.668361    2504 main.go:141] libmachine: IP: 192.168.105.6
	I1028 03:57:22.668367    2504 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1028 04:03:08.305472    2504 start.go:128] duration metric: took 6m0.056542208s to createHost
	I1028 04:03:08.305567    2504 start.go:83] releasing machines lock for "ha-921000", held for 6m0.057081959s
	W1028 04:03:08.305859    2504 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-921000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-921000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1028 04:03:08.313399    2504 out.go:201] 
	W1028 04:03:08.317474    2504 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1028 04:03:08.317563    2504 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1028 04:03:08.317696    2504 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1028 04:03:08.328419    2504 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-921000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (74.200625ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:03:08.419968    3326 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:03:08.419974    3326 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.38s)
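The stderr above captures the whole createHost flow: libmachine re-reads /var/db/dhcpd_leases every two seconds until the new VM's MAC address (f2:0c:76:dc:d9:10) finally appears on attempt 7, then blocks waiting for SSH until the 360-second budget expires. A minimal Go sketch of the lease-polling step, assuming the raw lease file uses the ip_address=/hw_address= key-value layout implied by the parsed entries logged above; waitForLeaseIP and scanLeases are illustrative names, not minikube's actual helpers:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// normalizeMAC strips leading zeros from each octet, because the lease
// file stores addresses that way (note "ID:1,e:b8:24:f2:8:2c" above for
// hardware address 0e:b8:24:f2:08:2c).
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		p = strings.TrimLeft(p, "0")
		if p == "" {
			p = "0"
		}
		parts[i] = p
	}
	return strings.Join(parts, ":")
}

// scanLeases returns the IP recorded for mac, or "" if no lease matches.
func scanLeases(leaseFile, mac string) string {
	f, err := os.Open(leaseFile)
	if err != nil {
		return ""
	}
	defer f.Close()
	want := normalizeMAC(mac)
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,f2:c:76:dc:d9:10": a type prefix, then the MAC
			if _, hw, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok && hw == want {
				return ip
			}
		}
	}
	return ""
}

// waitForLeaseIP mirrors the "Attempt N" loop in the log: poll every two
// seconds until the lease shows up or the budget is spent.
func waitForLeaseIP(leaseFile, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; ; attempt++ {
		if ip := scanLeases(leaseFile, mac); ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("create host timed out in %.6f seconds", timeout.Seconds())
		}
		fmt.Printf("Attempt %d: %s not in %s yet\n", attempt, mac, leaseFile)
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// The run above used a 360-second budget for the whole createHost step.
	ip, err := waitForLeaseIP("/var/db/dhcpd_leases", "f2:0c:76:dc:d9:10", 360*time.Second)
	fmt.Println(ip, err)
}
```

Note that in this run the lease lookup itself succeeded; the timeout was spent in the SSH wait that follows, which the sketch does not model.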

TestMultiControlPlane/serial/DeployApp (90.29s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.238583ms)

** stderr ** 
	error: cluster "ha-921000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- rollout status deployment/busybox: exit status 1 (63.518875ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.689792ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:08.609707    1598 retry.go:31] will retry after 1.301376189s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.165792ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:10.022588    1598 retry.go:31] will retry after 2.211683843s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.389166ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:12.345716    1598 retry.go:31] will retry after 2.235427702s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.433458ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:14.692030    1598 retry.go:31] will retry after 4.972774705s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.101166ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:19.774637    1598 retry.go:31] will retry after 4.17301275s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.329208ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:24.056224    1598 retry.go:31] will retry after 10.382596561s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.074416ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:34.551437    1598 retry.go:31] will retry after 13.058257957s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1028 04:03:42.876423    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.556583ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:03:47.722694    1598 retry.go:31] will retry after 16.621417719s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.486875ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:04:04.452010    1598 retry.go:31] will retry after 13.499066789s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.81025ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:04:18.062357    1598 retry.go:31] will retry after 20.25339104s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.002334ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.871292ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.204292ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.458083ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.743875ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.456125ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:38.707741    3449 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:38.707751    3449 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (90.29s)
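Every failed Pod-IP lookup above is retried with a growing, jittered delay (the retry.go:31 lines) until the test's overall budget runs out. A compact sketch of that retry-until-deadline pattern; retryUntil is an invented name and the backoff constants are illustrative, not minikube's actual values:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a roughly doubling, jittered delay
// until fn succeeds or the time budget is exhausted.
func retryUntil(budget time.Duration, fn func() error) error {
	deadline := time.Now().Add(budget)
	delay := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter desynchronizes retries; compare the irregular
		// "will retry after 1.3s / 2.2s / 5.0s / 10.4s" gaps above.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	_ = retryUntil(10*time.Second, func() error {
		return errors.New("failed to retrieve Pod IPs (may be temporary)")
	})
}
```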

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-921000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.756ms)

** stderr ** 
	error: no server found for cluster "ha-921000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.240375ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:38.805165    3454 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:38.805171    3454 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)
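The post-mortem error repeated throughout this report, "failed to get driver ip: parsing IP: ", ends in a bare colon because the string being parsed is empty: the profile's node record never received an address (note the "IP":"" node entry in the profile JSON further below). A minimal reproduction of that failure mode, with parseDriverIP standing in as a hypothetical version of the status-path helper:

```go
package main

import (
	"fmt"
	"net"
)

// parseDriverIP appends the raw input to its error message, so an empty
// input yields exactly "parsing IP: " with nothing after the colon.
func parseDriverIP(raw string) (net.IP, error) {
	ip := net.ParseIP(raw)
	if ip == nil {
		return nil, fmt.Errorf("parsing IP: %s", raw)
	}
	return ip, nil
}

func main() {
	_, err := parseDriverIP("") // the VM never received an address
	fmt.Println(err)            // prints: parsing IP: 
}
```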

TestMultiControlPlane/serial/AddWorkerNode (0.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-921000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-921000 -v=7 --alsologtostderr: exit status 50 (53.601917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:04:38.838796    3456 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:04:38.839073    3456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:38.839076    3456 out.go:358] Setting ErrFile to fd 2...
	I1028 04:04:38.839078    3456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:38.839227    3456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:04:38.839476    3456 mustload.go:65] Loading cluster: ha-921000
	I1028 04:04:38.839695    3456 config.go:182] Loaded profile config "ha-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:04:38.840341    3456 host.go:66] Checking if "ha-921000" exists ...
	I1028 04:04:38.845046    3456 out.go:201] 
	W1028 04:04:38.850026    3456 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-921000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-921000 endpoint: failed to lookup ip for ""
	W1028 04:04:38.850048    3456 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1028 04:04:38.854956    3456 out.go:201] 

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-921000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (34.773542ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:38.893942    3458 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:38.893951    3458 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)
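The suggestion above renders literally as "minikube delete <no value>" because Go's text/template prints <no value> when a referenced key is missing from the data it renders, so the hint's command arguments were never filled in. A tiny reproduction; the key name profileArg is invented for illustration:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Rendering against a map that lacks the referenced key produces the
	// literal placeholder, exactly as in the suggestion text above.
	t := template.Must(template.New("hint").Parse("minikube delete {{.profileArg}}\n"))
	_ = t.Execute(os.Stdout, map[string]any{}) // prints: minikube delete <no value>
}
```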

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-921000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-921000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.996125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-921000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-921000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-921000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.752375ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:38.957067    3461 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:38.957072    3461 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
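The "unexpected end of JSON input" at ha_test.go:264 is encoding/json's standard error for an empty input, which is all the failed kubectl call wrote to stdout. A one-line reproduction:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
	fmt.Println(err)                           // unexpected end of JSON input
}
```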

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-921000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-921000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-921000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-921000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (34.805459ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:39.045831    3466 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:39.045839    3466 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
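The check at ha_test.go:305 decodes the profile list --output json payload shown above and counts Config.Nodes; the profile still records only the single bootstrap node, so both the expected node count of 4 and the "HAppy" status fail. A trimmed sketch of that decode-and-count step, keeping only the fields the check needs (the struct is an illustration, not minikube's schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just enough of the payload above to count the
// nodes recorded for each valid profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-921000","Status":"Unknown",
		"Config":{"Nodes":[{"Name":"","IP":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// prints: ha-921000: 1 node(s), status "Unknown"; the test wanted 4 and "HAppy"
	fmt.Printf("%s: %d node(s), status %q\n", p.Name, len(p.Config.Nodes), p.Status)
}
```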

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-921000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-921000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.218916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:04:39.115828    3470 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:04:39.116137    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:39.116140    3470 out.go:358] Setting ErrFile to fd 2...
	I1028 04:04:39.116143    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:39.116258    3470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:04:39.116532    3470 mustload.go:65] Loading cluster: ha-921000
	I1028 04:04:39.116740    3470 config.go:182] Loaded profile config "ha-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:04:39.120920    3470 out.go:201] 
	W1028 04:04:39.123970    3470 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1028 04:04:39.123976    3470 out.go:270] * 
	* 
	W1028 04:04:39.125451    3470 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:04:39.129745    3470 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-921000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.179917ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:39.204633    3474 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:39.204641    3474 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-921000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-921000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (34.27425ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 04:04:39.291210    3479 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:39.291218    3479 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-921000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-921000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.550042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:04:39.324528    3481 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:04:39.324795    3481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:39.324798    3481 out.go:358] Setting ErrFile to fd 2...
	I1028 04:04:39.324801    3481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:39.324946    3481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:04:39.325183    3481 mustload.go:65] Loading cluster: ha-921000
	I1028 04:04:39.325369    3481 config.go:182] Loaded profile config "ha-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:04:39.329966    3481 out.go:201] 
	W1028 04:04:39.332930    3481 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1028 04:04:39.332935    3481 out.go:270] * 
	* 
	W1028 04:04:39.334344    3481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:04:39.338909    3481 out.go:201] 

** /stderr **
ha_test.go:424: I1028 04:04:39.324528    3481 out.go:345] Setting OutFile to fd 1 ...
I1028 04:04:39.324795    3481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:04:39.324798    3481 out.go:358] Setting ErrFile to fd 2...
I1028 04:04:39.324801    3481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:04:39.324946    3481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 04:04:39.325183    3481 mustload.go:65] Loading cluster: ha-921000
I1028 04:04:39.325369    3481 config.go:182] Loaded profile config "ha-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:04:39.329966    3481 out.go:201] 
W1028 04:04:39.332930    3481 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1028 04:04:39.332935    3481 out.go:270] * 
* 
W1028 04:04:39.334344    3481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 04:04:39.338909    3481 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-921000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-921000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (33.689041ms)

** stderr ** 
	E1028 04:04:39.408796    3485 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1028 04:04:39.409566    3485 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1028 04:04:39.410322    3485 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1028 04:04:39.410922    3485 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1028 04:04:39.411741    3485 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
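The connection-refused spam above is kubectl's fallback behavior: with no usable current context in the kubeconfig it tries its built-in default of localhost:8080. A quick way to confirm what kubectl is actually targeting (a diagnostic sketch, not part of the harness):

    # Prints the API server of the current kubeconfig context; empty output or a
    # "current-context is not set" error explains the localhost:8080 fallback.
    kubectl config current-context
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'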
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.38125ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1028 04:04:39.447310    3486 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:39.447319    3486 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-921000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-921000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
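The assertion above counts the Config.Nodes array embedded in the profile JSON. The same check can be reproduced by hand (a hypothetical one-liner; jq is not used by the test itself):

    # A healthy HA cluster should report 4 here; the capture above contains one node entry.
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-921000") | .Config.Nodes | length'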
ha_test.go:309: expected profile "ha-921000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-921000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.282834ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1028 04:04:39.535457    3491 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:04:39.535465    3491 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (983.54s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-921000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-921000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-921000 -v=7 --alsologtostderr: (4.912347333s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-921000 --wait=true -v=7 --alsologtostderr
E1028 04:05:05.965210    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:05:06.251413    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:08:42.875813    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:10:06.251383    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:11:29.339849    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:13:42.875729    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:15:06.251289    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:18:42.908836    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:20:06.284804    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-921000 --wait=true -v=7 --alsologtostderr: signal: killed (16m18.555347875s)
-- stdout --
	* [ha-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-921000" primary control-plane node in "ha-921000" cluster
	* Restarting existing qemu2 VM for "ha-921000" ...
-- /stdout --
** stderr ** 
	I1028 04:04:44.557241    3510 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:04:44.557438    3510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:44.557442    3510 out.go:358] Setting ErrFile to fd 2...
	I1028 04:04:44.557445    3510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:04:44.557628    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:04:44.558887    3510 out.go:352] Setting JSON to false
	I1028 04:04:44.578704    3510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2055,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:04:44.578773    3510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:04:44.583922    3510 out.go:177] * [ha-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:04:44.591689    3510 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:04:44.591746    3510 notify.go:220] Checking for updates...
	I1028 04:04:44.597890    3510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:04:44.599146    3510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:04:44.601823    3510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:04:44.604874    3510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:04:44.607921    3510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:04:44.611204    3510 config.go:182] Loaded profile config "ha-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:04:44.611256    3510 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:04:44.615787    3510 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:04:44.622856    3510 start.go:297] selected driver: qemu2
	I1028 04:04:44.622864    3510 start.go:901] validating driver "qemu2" against &{Name:ha-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:04:44.622915    3510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:04:44.625415    3510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:04:44.625440    3510 cni.go:84] Creating CNI manager for ""
	I1028 04:04:44.625467    3510 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 04:04:44.625521    3510 start.go:340] cluster config:
	{Name:ha-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-921000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:04:44.630126    3510 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:04:44.637753    3510 out.go:177] * Starting "ha-921000" primary control-plane node in "ha-921000" cluster
	I1028 04:04:44.641876    3510 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:04:44.641893    3510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:04:44.641903    3510 cache.go:56] Caching tarball of preloaded images
	I1028 04:04:44.641986    3510 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:04:44.641993    3510 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:04:44.642047    3510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/ha-921000/config.json ...
	I1028 04:04:44.642485    3510 start.go:360] acquireMachinesLock for ha-921000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:04:44.642539    3510 start.go:364] duration metric: took 47.042µs to acquireMachinesLock for "ha-921000"
	I1028 04:04:44.642549    3510 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:04:44.642553    3510 fix.go:54] fixHost starting: 
	I1028 04:04:44.642680    3510 fix.go:112] recreateIfNeeded on ha-921000: state=Stopped err=<nil>
	W1028 04:04:44.642687    3510 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:04:44.650833    3510 out.go:177] * Restarting existing qemu2 VM for "ha-921000" ...
	I1028 04:04:44.654878    3510 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:04:44.654919    3510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:0c:76:dc:d9:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/disk.qcow2
	I1028 04:04:44.697188    3510 main.go:141] libmachine: STDOUT: 
	I1028 04:04:44.697222    3510 main.go:141] libmachine: STDERR: 
	I1028 04:04:44.697226    3510 main.go:141] libmachine: Attempt 0
	I1028 04:04:44.697255    3510 main.go:141] libmachine: Searching for f2:0c:76:dc:d9:10 in /var/db/dhcpd_leases ...
	I1028 04:04:44.697340    3510 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1028 04:04:44.697356    3510 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:f2:0c:76:dc:d9:10 ID:1,f2:c:76:dc:d9:10 Lease:0x671f6fc9}
	I1028 04:04:44.697366    3510 main.go:141] libmachine: Found match: f2:0c:76:dc:d9:10
	I1028 04:04:44.697375    3510 main.go:141] libmachine: IP: 192.168.105.6
	I1028 04:04:44.697381    3510 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-921000 -v=7 --alsologtostderr" : signal: killed
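The start command was killed by the harness while still "Waiting for VM to start (ssh ... docker@192.168.105.6)". A manual probe can separate a dead guest from a stuck provisioner (a sketch; the key path assumes minikube's standard machines layout):

    # Exit 0 means the guest is up and accepting SSH; a timeout points at the VM or network.
    ssh -o ConnectTimeout=5 \
        -i /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/ha-921000/id_rsa \
        docker@192.168.105.6 true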
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-921000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-921000: context deadline exceeded (375ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-921000" : context deadline exceeded
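The 375ns failure means the suite's shared context had already expired before `node list` was launched, so the binary never ran at all. A rough shell analogue (assumes GNU coreutils timeout, e.g. from Homebrew on macOS):

    # The child is killed essentially immediately, mirroring a pre-expired test context.
    timeout 0.001 out/minikube-darwin-arm64 node list -p ha-921000; echo "exit=$?"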
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-921000	
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-921000 -n ha-921000: exit status 7 (35.056917ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1028 04:21:03.107470    3705 status.go:393] failed to get driver ip: parsing IP: 
	E1028 04:21:03.107479    3705 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-921000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (983.54s)
TestJSONOutput/start/Command (725.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-990000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1028 04:21:46.002741    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:23:42.910796    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:25:06.286906    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:28:09.379851    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:28:42.912431    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:30:06.286491    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:33:42.914110    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-990000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.260807208s)
-- stdout --
	{"specversion":"1.0","id":"071b2722-9a8e-462e-b010-ce985fb134e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"16b438f6-4cce-47d6-860f-ca2ad7511792","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"5753b9d8-988e-4942-8c0f-386592a6faa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig"}}
	{"specversion":"1.0","id":"ae4af745-7dd4-44e0-b2eb-5b7a078345e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"be964e17-ed44-4b53-b979-f97ae8264b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33a13651-3505-41a4-8b89-376d5ec158c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube"}}
	{"specversion":"1.0","id":"b3a8ff72-f49d-406e-8ef3-b8530d2c8a05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd31b25d-2600-4a46-8ccb-86054dbabbd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f2d36b6-df8b-486b-96f4-f5b5da7e7d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"eaab596e-ae3a-4e26-a936-0643d0e27021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-990000\" primary control-plane node in \"json-output-990000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"52d1494d-9264-4818-98d2-f39826967ef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"fad0e3aa-bc98-4e95-a6bc-76dfd77b7154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-990000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"da7565f7-4d4b-49d6-927d-1a11ff7f7767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"72290528-cecf-4a52-b2b7-234c80292a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"80fcd909-dfbd-4e27-8c36-d86eeb49b53b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-990000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"555b9097-e739-4da4-a581-52ab34c4c0e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-990000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.26s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-990000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 071b2722-9a8e-462e-b010-ce985fb134e6
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 16b438f6-4cce-47d6-860f-ca2ad7511792
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19876"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5753b9d8-988e-4942-8c0f-386592a6faa3
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ae4af745-7dd4-44e0-b2eb-5b7a078345e4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be964e17-ed44-4b53-b979-f97ae8264b6b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 33a13651-3505-41a4-8b89-376d5ec158c9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b3a8ff72-f49d-406e-8ef3-b8530d2c8a05
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fd31b25d-2600-4a46-8ccb-86054dbabbd5
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7f2d36b6-df8b-486b-96f4-f5b5da7e7d70
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eaab596e-ae3a-4e26-a936-0643d0e27021
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-990000\" primary control-plane node in \"json-output-990000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 52d1494d-9264-4818-98d2-f39826967ef6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fad0e3aa-bc98-4e95-a6bc-76dfd77b7154
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-990000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: da7565f7-4d4b-49d6-927d-1a11ff7f7767
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 72290528-cecf-4a52-b2b7-234c80292a1e
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 80fcd909-dfbd-4e27-8c36-d86eeb49b53b
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-990000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 555b9097-e739-4da4-a581-52ab34c4c0e0
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
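What this subtest asserts: each io.k8s.sigs.minikube.step event must carry a distinct data.currentstep, and the dump above shows step "9" reused across the create/delete/retry cycle. The property can be checked by hand against a captured event stream (hypothetical events.json holding one CloudEvent per line, as emitted with --output=json; uses jq):

    # Any number printed here was assigned to more than one step.
    jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep' events.json \
      | sort -n | uniq -d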
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 071b2722-9a8e-462e-b010-ce985fb134e6
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 16b438f6-4cce-47d6-860f-ca2ad7511792
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19876"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5753b9d8-988e-4942-8c0f-386592a6faa3
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ae4af745-7dd4-44e0-b2eb-5b7a078345e4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be964e17-ed44-4b53-b979-f97ae8264b6b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 33a13651-3505-41a4-8b89-376d5ec158c9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b3a8ff72-f49d-406e-8ef3-b8530d2c8a05
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fd31b25d-2600-4a46-8ccb-86054dbabbd5
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7f2d36b6-df8b-486b-96f4-f5b5da7e7d70
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eaab596e-ae3a-4e26-a936-0643d0e27021
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-990000\" primary control-plane node in \"json-output-990000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 52d1494d-9264-4818-98d2-f39826967ef6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fad0e3aa-bc98-4e95-a6bc-76dfd77b7154
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-990000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: da7565f7-4d4b-49d6-927d-1a11ff7f7767
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 72290528-cecf-4a52-b2b7-234c80292a1e
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 80fcd909-dfbd-4e27-8c36-d86eeb49b53b
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-990000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 555b9097-e739-4da4-a581-52ab34c4c0e0
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
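The companion property here is monotonicity rather than uniqueness; the same captured stream can be checked with sort's order test (same hypothetical events.json as above; note sort -c is non-strict, so duplicates still need the uniq -d check shown earlier):

    # Reports the first step number that appears out of increasing order.
    jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep' events.json \
      | sort -c -n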
TestJSONOutput/pause/Command (0.09s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-990000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-990000 --output=json --user=testUser: exit status 50 (86.487208ms)
-- stdout --
	{"specversion":"1.0","id":"a6e8d679-f3c1-4510-8e0b-18011768e237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-990000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-990000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)
TestJSONOutput/unpause/Command (0.06s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-990000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-990000 --output=json --user=testUser: exit status 50 (58.803958ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-990000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-990000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
TestMountStart/serial/StartWithMountFirst (10.13s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1028 04:35:06.290402    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.049569208s)
-- stdout --
	* [mount-start-1-809000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-809000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-809000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-809000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-809000 -n mount-start-1-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-809000 -n mount-start-1-809000: exit status 7 (76.991167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)
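This failure mode recurs in most of the remaining qemu2 starts: the qemu2 driver hands the VM's NIC to socket_vmnet_client, which needs the socket_vmnet daemon listening on /var/run/socket_vmnet, and "Connection refused" says nothing is holding that socket. A quick health check on the agent host (paths from the log; the launchd query is an assumption about how the daemon is installed):

    # The socket must exist and a daemon must be holding it.
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet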
TestMultiNode/serial/FreshStart2Nodes (9.84s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-677000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-677000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.768777042s)
-- stdout --
	* [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1028 04:35:13.495813    4258 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:35:13.495966    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:35:13.495969    4258 out.go:358] Setting ErrFile to fd 2...
	I1028 04:35:13.495972    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:35:13.496091    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:35:13.497242    4258 out.go:352] Setting JSON to false
	I1028 04:35:13.514797    4258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3884,"bootTime":1730111429,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:35:13.514875    4258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:35:13.520946    4258 out.go:177] * [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:35:13.528853    4258 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:35:13.528926    4258 notify.go:220] Checking for updates...
	I1028 04:35:13.535802    4258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:35:13.538797    4258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:35:13.542847    4258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:35:13.545733    4258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:35:13.548828    4258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:35:13.552033    4258 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:35:13.554687    4258 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:35:13.561800    4258 start.go:297] selected driver: qemu2
	I1028 04:35:13.561807    4258 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:35:13.561815    4258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:35:13.564328    4258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:35:13.565536    4258 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:35:13.568914    4258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:35:13.568948    4258 cni.go:84] Creating CNI manager for ""
	I1028 04:35:13.568970    4258 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 04:35:13.568976    4258 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 04:35:13.569005    4258 start.go:340] cluster config:
	{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:35:13.573591    4258 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:35:13.582811    4258 out.go:177] * Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	I1028 04:35:13.586811    4258 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:35:13.586825    4258 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:35:13.586832    4258 cache.go:56] Caching tarball of preloaded images
	I1028 04:35:13.586902    4258 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:35:13.586908    4258 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:35:13.587130    4258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/multinode-677000/config.json ...
	I1028 04:35:13.587142    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/multinode-677000/config.json: {Name:mkbb61f5c809e556d176e60c91964eeaf1293b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:35:13.587407    4258 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:35:13.587461    4258 start.go:364] duration metric: took 46.917µs to acquireMachinesLock for "multinode-677000"
	I1028 04:35:13.587474    4258 start.go:93] Provisioning new machine with config: &{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:35:13.587500    4258 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:35:13.591763    4258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:35:13.609430    4258 start.go:159] libmachine.API.Create for "multinode-677000" (driver="qemu2")
	I1028 04:35:13.609461    4258 client.go:168] LocalClient.Create starting
	I1028 04:35:13.609531    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:35:13.609573    4258 main.go:141] libmachine: Decoding PEM data...
	I1028 04:35:13.609584    4258 main.go:141] libmachine: Parsing certificate...
	I1028 04:35:13.609637    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:35:13.609668    4258 main.go:141] libmachine: Decoding PEM data...
	I1028 04:35:13.609676    4258 main.go:141] libmachine: Parsing certificate...
	I1028 04:35:13.610142    4258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:35:13.765854    4258 main.go:141] libmachine: Creating SSH key...
	I1028 04:35:13.816098    4258 main.go:141] libmachine: Creating Disk image...
	I1028 04:35:13.816104    4258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:35:13.816280    4258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:13.826046    4258 main.go:141] libmachine: STDOUT: 
	I1028 04:35:13.826067    4258 main.go:141] libmachine: STDERR: 
	I1028 04:35:13.826129    4258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2 +20000M
	I1028 04:35:13.834553    4258 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:35:13.834570    4258 main.go:141] libmachine: STDERR: 
	I1028 04:35:13.834591    4258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:13.834597    4258 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:35:13.834609    4258 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:35:13.834649    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:2e:88:65:8a:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:13.836427    4258 main.go:141] libmachine: STDOUT: 
	I1028 04:35:13.836442    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:35:13.836459    4258 client.go:171] duration metric: took 226.991833ms to LocalClient.Create
	I1028 04:35:15.838644    4258 start.go:128] duration metric: took 2.251113417s to createHost
	I1028 04:35:15.838706    4258 start.go:83] releasing machines lock for "multinode-677000", held for 2.251223375s
	W1028 04:35:15.838759    4258 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:35:15.855924    4258 out.go:177] * Deleting "multinode-677000" in qemu2 ...
	W1028 04:35:15.881630    4258 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:35:15.881681    4258 start.go:729] Will try again in 5 seconds ...
	I1028 04:35:20.883878    4258 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:35:20.884407    4258 start.go:364] duration metric: took 445.834µs to acquireMachinesLock for "multinode-677000"
	I1028 04:35:20.884522    4258 start.go:93] Provisioning new machine with config: &{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:35:20.884820    4258 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:35:20.898505    4258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:35:20.948108    4258 start.go:159] libmachine.API.Create for "multinode-677000" (driver="qemu2")
	I1028 04:35:20.948163    4258 client.go:168] LocalClient.Create starting
	I1028 04:35:20.948299    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:35:20.948387    4258 main.go:141] libmachine: Decoding PEM data...
	I1028 04:35:20.948408    4258 main.go:141] libmachine: Parsing certificate...
	I1028 04:35:20.948504    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:35:20.948560    4258 main.go:141] libmachine: Decoding PEM data...
	I1028 04:35:20.948574    4258 main.go:141] libmachine: Parsing certificate...
	I1028 04:35:20.949242    4258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:35:21.117762    4258 main.go:141] libmachine: Creating SSH key...
	I1028 04:35:21.163966    4258 main.go:141] libmachine: Creating Disk image...
	I1028 04:35:21.163976    4258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:35:21.164147    4258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:21.173930    4258 main.go:141] libmachine: STDOUT: 
	I1028 04:35:21.173953    4258 main.go:141] libmachine: STDERR: 
	I1028 04:35:21.174024    4258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2 +20000M
	I1028 04:35:21.182480    4258 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:35:21.182502    4258 main.go:141] libmachine: STDERR: 
	I1028 04:35:21.182515    4258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:21.182519    4258 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:35:21.182525    4258 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:35:21.182562    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2e:26:cf:5e:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:35:21.184345    4258 main.go:141] libmachine: STDOUT: 
	I1028 04:35:21.184361    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:35:21.184372    4258 client.go:171] duration metric: took 236.201541ms to LocalClient.Create
	I1028 04:35:23.186555    4258 start.go:128] duration metric: took 2.301694458s to createHost
	I1028 04:35:23.186654    4258 start.go:83] releasing machines lock for "multinode-677000", held for 2.30220925s
	W1028 04:35:23.187018    4258 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:35:23.200689    4258 out.go:201] 
	W1028 04:35:23.204731    4258 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:35:23.204756    4258 out.go:270] * 
	* 
	W1028 04:35:23.207725    4258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:35:23.217661    4258 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-677000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (73.171458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.84s)
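Both createHost attempts above die at the same step: socket_vmnet_client gets "Connection refused" on the unix socket at /var/run/socket_vmnet, meaning the socket_vmnet daemon was not serving on the CI host. A minimal Go probe for that precondition (an illustrative sketch, not part of the test suite; the socket path is taken from the log above):

// probe_socket_vmnet.go: check whether the socket_vmnet daemon is
// accepting connections on its unix socket. A "connection refused"
// here reproduces the failure recorded in this test.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // e.g. connection refused
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}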

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (76.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.364083ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-677000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- rollout status deployment/busybox: exit status 1 (62.389ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.2465ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:23.561501    1598 retry.go:31] will retry after 1.003577506s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.747791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:24.674253    1598 retry.go:31] will retry after 2.01745222s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.341584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:26.801417    1598 retry.go:31] will retry after 2.512503489s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.476958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:29.423839    1598 retry.go:31] will retry after 1.733023097s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.181041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:31.268445    1598 retry.go:31] will retry after 2.716666241s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.07925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:34.096599    1598 retry.go:31] will retry after 11.27626517s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.550041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:45.483944    1598 retry.go:31] will retry after 8.524359192s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.866667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:35:54.119604    1598 retry.go:31] will retry after 23.012517271s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.65525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:36:17.243520    1598 retry.go:31] will retry after 22.566915323s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.797292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.675834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.359792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.772959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.882709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (33.979875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (76.90s)
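The retry.go:31 lines above show the harness polling for pod IPs with growing, jittered delays for roughly 75 seconds before giving up. A minimal sketch of that retry pattern (a simplified stand-in, not minikube's actual retry helper):

// retry_sketch.go: retry an operation with a growing, jittered delay
// until a deadline passes - the shape of the "will retry after ..."
// lines logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(total time.Duration, op func() error) error {
	deadline := time.Now().Add(total)
	delay := time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Add jitter so concurrent pollers do not retry in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	_ = retryUntil(5*time.Second, func() error {
		return errors.New(`no server found for cluster "multinode-677000"`)
	})
}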

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-677000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.733375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (34.958791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-677000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-677000 -v 3 --alsologtostderr: exit status 83 (47.289667ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-677000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-677000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 04:36:40.335977    4334 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:40.336348    4334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.336351    4334 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:40.336354    4334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.336502    4334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:40.336723    4334 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:40.336933    4334 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:40.341716    4334 out.go:177] * The control-plane node multinode-677000 host is not running: state=Stopped
	I1028 04:36:40.345622    4334 out.go:177]   To start a cluster, run: "minikube start -p multinode-677000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-677000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (35.164417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-677000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-677000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (33.523375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-677000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-677000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-677000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (34.980083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)
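The second failure message, "unexpected end of JSON input", follows directly from the first: kubectl exited with an error and wrote nothing to stdout, and decoding an empty byte slice as JSON always returns exactly that error. A short demonstration:

// empty_json.go: json.Unmarshal on empty input yields the
// "unexpected end of JSON input" error seen above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	fmt.Println(json.Unmarshal([]byte(""), &v)) // unexpected end of JSON input
}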

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-677000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-677000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (35.289125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
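The assertion compares the number of entries in the profile's Nodes array against the three nodes the serial flow should have created by now; the dumped config shows only the single control-plane entry. A simplified sketch of that count (hypothetical types trimmed to the fields used here, not the actual test helper):

// node_count.go: count Nodes entries in `minikube profile list
// --output json` output.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated stand-in for the JSON dumped in the failure above.
	raw := []byte(`{"valid":[{"Name":"multinode-677000","Config":{"Nodes":[{}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // 1, not the expected 3
	}
}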

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status --output json --alsologtostderr: exit status 7 (34.241ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-677000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 04:36:40.576641    4346 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:40.576805    4346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.576808    4346 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:40.576811    4346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.576934    4346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:40.577069    4346 out.go:352] Setting JSON to true
	I1028 04:36:40.577080    4346 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:40.577139    4346 notify.go:220] Checking for updates...
	I1028 04:36:40.577272    4346 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:40.577281    4346 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:40.577523    4346 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:40.577527    4346 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:40.577529    4346 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-677000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (34.274417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
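The decode failure here is a type mismatch rather than bad output: with only one host, `status --output json` printed a single JSON object, while the test decodes into a slice ([]cluster.Status). A JSON object can never unmarshal into a Go slice, as this small demonstration shows (a local Status type stands in for cluster.Status):

// object_vs_slice.go: reproducing "cannot unmarshal object into Go
// value of type []...Status".
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name string
	Host string
}

func main() {
	out := []byte(`{"Name":"multinode-677000","Host":"Stopped"}`)

	var many []Status
	fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(out, &one)) // <nil> - a single struct decodes fine
}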

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 node stop m03: exit status 85 (52.02775ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-677000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status: exit status 7 (34.968375ms)

                                                
                                                
-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr: exit status 7 (34.522542ms)

                                                
                                                
-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 04:36:40.732329    4354 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:40.732511    4354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.732514    4354 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:40.732517    4354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.732641    4354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:40.732775    4354 out.go:352] Setting JSON to false
	I1028 04:36:40.732789    4354 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:40.732855    4354 notify.go:220] Checking for updates...
	I1028 04:36:40.733863    4354 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:40.733875    4354 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:40.734108    4354 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:40.734112    4354 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:40.734114    4354 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr": multinode-677000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (34.482291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)
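
For reference: the "status: &{Name:multinode-677000 Host:Stopped ...}" lines in the stderr above are Go's %+v rendering of minikube's internal status value. A minimal, self-contained sketch of that formatting, with the field set inferred from the log output rather than taken from the minikube source:

    package main

    import "fmt"

    // Status mirrors the fields visible in the "status: &{...}" log lines;
    // the real struct lives in the minikube source and may differ.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        s := &Status{Name: "multinode-677000", Host: "Stopped",
            Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
        // Printing a struct pointer with %+v yields the &{Field:value ...}
        // form seen in the status.go:176 lines above.
        fmt.Printf("%s status: %+v\n", s.Name, s)
    }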

TestMultiNode/serial/StartAfterStop (45.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.367458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:36:40.802730    4358 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:40.803018    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.803021    4358 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:40.803024    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.803173    4358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:40.803433    4358 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:40.803639    4358 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:40.807405    4358 out.go:201] 
	W1028 04:36:40.810643    4358 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1028 04:36:40.810648    4358 out.go:270] * 
	* 
	W1028 04:36:40.812129    4358 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:36:40.815651    4358 out.go:201] 

** /stderr **
multinode_test.go:284: I1028 04:36:40.802730    4358 out.go:345] Setting OutFile to fd 1 ...
I1028 04:36:40.803018    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:36:40.803021    4358 out.go:358] Setting ErrFile to fd 2...
I1028 04:36:40.803024    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:36:40.803173    4358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 04:36:40.803433    4358 mustload.go:65] Loading cluster: multinode-677000
I1028 04:36:40.803639    4358 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:36:40.807405    4358 out.go:201] 
W1028 04:36:40.810643    4358 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1028 04:36:40.810648    4358 out.go:270] * 
* 
W1028 04:36:40.812129    4358 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 04:36:40.815651    4358 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-677000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (34.545166ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:40.853090    4360 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:40.853258    4360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.853262    4360 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:40.853264    4360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:40.853409    4360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:40.853538    4360 out.go:352] Setting JSON to false
	I1028 04:36:40.853550    4360 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:40.853594    4360 notify.go:220] Checking for updates...
	I1028 04:36:40.853791    4360 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:40.853800    4360 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:40.854047    4360 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:40.854050    4360 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:40.854053    4360 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:40.854972    1598 retry.go:31] will retry after 1.380558902s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (79.183083ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:42.313597    4365 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:42.313836    4365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:42.313840    4365 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:42.313843    4365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:42.314004    4365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:42.314171    4365 out.go:352] Setting JSON to false
	I1028 04:36:42.314184    4365 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:42.314225    4365 notify.go:220] Checking for updates...
	I1028 04:36:42.314443    4365 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:42.314453    4365 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:42.314744    4365 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:42.314748    4365 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:42.314751    4365 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:42.315803    1598 retry.go:31] will retry after 1.014645815s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (79.99025ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:43.410306    4367 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:43.410565    4367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:43.410569    4367 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:43.410573    4367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:43.410749    4367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:43.410922    4367 out.go:352] Setting JSON to false
	I1028 04:36:43.410937    4367 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:43.410981    4367 notify.go:220] Checking for updates...
	I1028 04:36:43.411211    4367 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:43.411222    4367 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:43.411514    4367 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:43.411518    4367 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:43.411520    4367 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:43.412498    1598 retry.go:31] will retry after 2.483203667s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (79.087333ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:45.975097    4369 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:45.975339    4369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:45.975343    4369 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:45.975347    4369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:45.975511    4369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:45.975673    4369 out.go:352] Setting JSON to false
	I1028 04:36:45.975687    4369 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:45.975722    4369 notify.go:220] Checking for updates...
	I1028 04:36:45.975940    4369 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:45.975950    4369 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:45.976254    4369 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:45.976259    4369 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:45.976261    4369 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:45.977239    1598 retry.go:31] will retry after 3.442103845s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (78.986667ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:49.498474    4371 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:49.498738    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:49.498742    4371 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:49.498746    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:49.498924    4371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:49.499068    4371 out.go:352] Setting JSON to false
	I1028 04:36:49.499081    4371 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:49.499128    4371 notify.go:220] Checking for updates...
	I1028 04:36:49.499348    4371 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:49.499359    4371 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:49.499685    4371 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:49.499690    4371 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:49.499693    4371 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:49.500725    1598 retry.go:31] will retry after 6.098994561s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (79.168542ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:36:55.679201    4375 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:36:55.679420    4375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:55.679424    4375 out.go:358] Setting ErrFile to fd 2...
	I1028 04:36:55.679428    4375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:36:55.679598    4375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:36:55.679751    4375 out.go:352] Setting JSON to false
	I1028 04:36:55.679764    4375 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:36:55.679814    4375 notify.go:220] Checking for updates...
	I1028 04:36:55.680027    4375 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:36:55.680037    4375 status.go:174] checking status of multinode-677000 ...
	I1028 04:36:55.680330    4375 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:36:55.680335    4375 status.go:384] host is not running, skipping remaining checks
	I1028 04:36:55.680337    4375 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:36:55.681352    1598 retry.go:31] will retry after 7.167185527s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (77.918333ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:37:02.926794    4381 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:02.927010    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:02.927014    4381 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:02.927018    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:02.927156    4381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:02.927319    4381 out.go:352] Setting JSON to false
	I1028 04:37:02.927333    4381 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:37:02.927384    4381 notify.go:220] Checking for updates...
	I1028 04:37:02.927596    4381 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:02.927606    4381 status.go:174] checking status of multinode-677000 ...
	I1028 04:37:02.927907    4381 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:37:02.927912    4381 status.go:384] host is not running, skipping remaining checks
	I1028 04:37:02.927914    4381 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:37:02.928931    1598 retry.go:31] will retry after 8.579109755s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (79.101416ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:37:11.587388    4384 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:11.587616    4384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:11.587620    4384 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:11.587623    4384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:11.587794    4384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:11.587953    4384 out.go:352] Setting JSON to false
	I1028 04:37:11.587968    4384 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:37:11.588031    4384 notify.go:220] Checking for updates...
	I1028 04:37:11.588243    4384 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:11.588256    4384 status.go:174] checking status of multinode-677000 ...
	I1028 04:37:11.588581    4384 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:37:11.588586    4384 status.go:384] host is not running, skipping remaining checks
	I1028 04:37:11.588588    4384 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:37:11.589628    1598 retry.go:31] will retry after 14.578139659s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr: exit status 7 (77.48875ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:37:26.245591    4388 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:26.245834    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:26.245839    4388 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:26.245842    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:26.245997    4388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:26.246147    4388 out.go:352] Setting JSON to false
	I1028 04:37:26.246161    4388 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:37:26.246198    4388 notify.go:220] Checking for updates...
	I1028 04:37:26.246415    4388 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:26.246425    4388 status.go:174] checking status of multinode-677000 ...
	I1028 04:37:26.246749    4388 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:37:26.246754    4388 status.go:384] host is not running, skipping remaining checks
	I1028 04:37:26.246757    4388 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-677000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (36.454958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.51s)
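
The "retry.go:31] will retry after ..." lines above show the harness re-polling status with growing, jittered waits before giving up. A rough Go sketch of that retry shape (the real helper is minikube's retry package; the exact backoff factor and jitter below are assumptions):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a randomized, growing interval between failures, much like
    // the "will retry after 1.38s ... 14.57s" progression in the log.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        // Always fails here, mirroring the persistent "exit status 7" above.
        _ = retryWithBackoff(func() error { return errors.New("exit status 7") }, 3, 500*time.Millisecond)
    }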

TestMultiNode/serial/RestartKeepsNodes (8.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-677000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-677000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-677000: (3.616570209s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-677000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-677000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226762958s)

-- stdout --
	* [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	* Restarting existing qemu2 VM for "multinode-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:37:30.006144    4412 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:30.006339    4412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:30.006344    4412 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:30.006346    4412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:30.006522    4412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:30.007817    4412 out.go:352] Setting JSON to false
	I1028 04:37:30.027859    4412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4021,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:37:30.027942    4412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:37:30.031897    4412 out.go:177] * [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:37:30.040605    4412 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:37:30.040663    4412 notify.go:220] Checking for updates...
	I1028 04:37:30.046802    4412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:37:30.048172    4412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:37:30.050739    4412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:37:30.053784    4412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:37:30.056798    4412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:37:30.060078    4412 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:30.060131    4412 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:37:30.064711    4412 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:37:30.071795    4412 start.go:297] selected driver: qemu2
	I1028 04:37:30.071804    4412 start.go:901] validating driver "qemu2" against &{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:37:30.071875    4412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:37:30.074352    4412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:37:30.074380    4412 cni.go:84] Creating CNI manager for ""
	I1028 04:37:30.074407    4412 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 04:37:30.074464    4412 start.go:340] cluster config:
	{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:37:30.078936    4412 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:37:30.086672    4412 out.go:177] * Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	I1028 04:37:30.090804    4412 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:37:30.090819    4412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:37:30.090828    4412 cache.go:56] Caching tarball of preloaded images
	I1028 04:37:30.090896    4412 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:37:30.090902    4412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:37:30.090963    4412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/multinode-677000/config.json ...
	I1028 04:37:30.091477    4412 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:37:30.091523    4412 start.go:364] duration metric: took 39.542µs to acquireMachinesLock for "multinode-677000"
	I1028 04:37:30.091531    4412 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:37:30.091535    4412 fix.go:54] fixHost starting: 
	I1028 04:37:30.091644    4412 fix.go:112] recreateIfNeeded on multinode-677000: state=Stopped err=<nil>
	W1028 04:37:30.091651    4412 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:37:30.094805    4412 out.go:177] * Restarting existing qemu2 VM for "multinode-677000" ...
	I1028 04:37:30.102834    4412 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:37:30.102881    4412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2e:26:cf:5e:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:37:30.105128    4412 main.go:141] libmachine: STDOUT: 
	I1028 04:37:30.105148    4412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:37:30.105176    4412 fix.go:56] duration metric: took 13.639458ms for fixHost
	I1028 04:37:30.105191    4412 start.go:83] releasing machines lock for "multinode-677000", held for 13.654542ms
	W1028 04:37:30.105197    4412 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:37:30.105241    4412 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:37:30.105246    4412 start.go:729] Will try again in 5 seconds ...
	I1028 04:37:35.107373    4412 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:37:35.107791    4412 start.go:364] duration metric: took 313.458µs to acquireMachinesLock for "multinode-677000"
	I1028 04:37:35.107973    4412 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:37:35.107991    4412 fix.go:54] fixHost starting: 
	I1028 04:37:35.108782    4412 fix.go:112] recreateIfNeeded on multinode-677000: state=Stopped err=<nil>
	W1028 04:37:35.108807    4412 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:37:35.112078    4412 out.go:177] * Restarting existing qemu2 VM for "multinode-677000" ...
	I1028 04:37:35.116152    4412 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:37:35.116385    4412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2e:26:cf:5e:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:37:35.126272    4412 main.go:141] libmachine: STDOUT: 
	I1028 04:37:35.126344    4412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:37:35.126430    4412 fix.go:56] duration metric: took 18.439417ms for fixHost
	I1028 04:37:35.126455    4412 start.go:83] releasing machines lock for "multinode-677000", held for 18.594916ms
	W1028 04:37:35.126649    4412 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:37:35.135104    4412 out.go:201] 
	W1028 04:37:35.139253    4412 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:37:35.139276    4412 out.go:270] * 
	* 
	W1028 04:37:35.141958    4412 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:37:35.150101    4412 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-677000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-677000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (36.458958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.99s)
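
Each qemu2 restart above dies on Failed to connect to "/var/run/socket_vmnet": Connection refused: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon listening on that unix socket. A small Go probe for checking the socket from the host (the path comes from the SocketVMnetPath value in the config dump above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket socket_vmnet_client connects to. On this
        // runner the daemon is not accepting connections, so the dial fails
        // with "connection refused", matching the driver errors in the log.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }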

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 node delete m03: exit status 83 (45.0045ms)

-- stdout --
	* The control-plane node multinode-677000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-677000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-677000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr: exit status 7 (34.557ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:37:35.352759    4426 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:35.352945    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:35.352948    4426 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:35.352950    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:35.353095    4426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:35.353233    4426 out.go:352] Setting JSON to false
	I1028 04:37:35.353246    4426 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:37:35.353291    4426 notify.go:220] Checking for updates...
	I1028 04:37:35.353465    4426 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:35.353473    4426 status.go:174] checking status of multinode-677000 ...
	I1028 04:37:35.353723    4426 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:37:35.353727    4426 status.go:384] host is not running, skipping remaining checks
	I1028 04:37:35.353729    4426 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (33.993417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
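
The post-mortem's --format={{.Host}} flag is a Go text/template evaluated against the status value, which is why the command prints only "Stopped". A minimal sketch of the same rendering, using a reduced struct inferred from the log:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a cut-down stand-in for the value minikube renders;
    // only the fields needed by the template are included here.
    type Status struct {
        Host    string
        Kubelet string
    }

    func main() {
        // {{.Host}} selects the Host field, so the output is just "Stopped".
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
    }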

TestMultiNode/serial/StopMultiNode (3.4s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-677000 stop: (3.256782s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status: exit status 7 (69.540541ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr: exit status 7 (35.083417ms)

-- stdout --
	multinode-677000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:37:38.748805    4450 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:38.749002    4450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:38.749005    4450 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:38.749007    4450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:38.749135    4450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:38.749257    4450 out.go:352] Setting JSON to false
	I1028 04:37:38.749268    4450 mustload.go:65] Loading cluster: multinode-677000
	I1028 04:37:38.749319    4450 notify.go:220] Checking for updates...
	I1028 04:37:38.749507    4450 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:38.749516    4450 status.go:174] checking status of multinode-677000 ...
	I1028 04:37:38.749775    4450 status.go:371] multinode-677000 host status = "Stopped" (err=<nil>)
	I1028 04:37:38.749778    4450 status.go:384] host is not running, skipping remaining checks
	I1028 04:37:38.749781    4450 status.go:176] multinode-677000 status: &{Name:multinode-677000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr": multinode-677000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-677000 status --alsologtostderr": multinode-677000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (33.899417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.40s)
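
The count assertions at multinode_test.go:364 and :368 fail because only one node's status block is printed where the test expects one per node of the two-node cluster (the earlier FreshStart2Nodes failure means the second node never existed). A sketch of that counting check (the expected count of 2 is an assumption based on the test's two-node setup):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Output captured above: a single node's status block.
        out := "multinode-677000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
        hosts := strings.Count(out, "host: Stopped")
        kubelets := strings.Count(out, "kubelet: Stopped")
        if hosts != 2 || kubelets != 2 {
            fmt.Printf("incorrect number of stopped hosts/kubelets: got %d/%d, want 2/2\n", hosts, kubelets)
        }
    }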

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-677000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-677000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.193952958s)

-- stdout --
	* [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	* Restarting existing qemu2 VM for "multinode-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:37:38.817138    4454 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:37:38.817290    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:38.817294    4454 out.go:358] Setting ErrFile to fd 2...
	I1028 04:37:38.817296    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:37:38.817417    4454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:37:38.818457    4454 out.go:352] Setting JSON to false
	I1028 04:37:38.836172    4454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4029,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:37:38.836250    4454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:37:38.841536    4454 out.go:177] * [multinode-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:37:38.848373    4454 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:37:38.848470    4454 notify.go:220] Checking for updates...
	I1028 04:37:38.856358    4454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:37:38.860400    4454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:37:38.863423    4454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:37:38.866404    4454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:37:38.869378    4454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:37:38.872645    4454 config.go:182] Loaded profile config "multinode-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:37:38.872929    4454 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:37:38.876368    4454 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:37:38.883370    4454 start.go:297] selected driver: qemu2
	I1028 04:37:38.883380    4454 start.go:901] validating driver "qemu2" against &{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:37:38.883440    4454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:37:38.886010    4454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:37:38.886040    4454 cni.go:84] Creating CNI manager for ""
	I1028 04:37:38.886062    4454 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 04:37:38.886106    4454 start.go:340] cluster config:
	{Name:multinode-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:37:38.890564    4454 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:37:38.898392    4454 out.go:177] * Starting "multinode-677000" primary control-plane node in "multinode-677000" cluster
	I1028 04:37:38.901329    4454 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:37:38.901349    4454 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:37:38.901362    4454 cache.go:56] Caching tarball of preloaded images
	I1028 04:37:38.901428    4454 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:37:38.901434    4454 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:37:38.901496    4454 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/multinode-677000/config.json ...
	I1028 04:37:38.901841    4454 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:37:38.901872    4454 start.go:364] duration metric: took 25µs to acquireMachinesLock for "multinode-677000"
	I1028 04:37:38.901887    4454 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:37:38.901892    4454 fix.go:54] fixHost starting: 
	I1028 04:37:38.902016    4454 fix.go:112] recreateIfNeeded on multinode-677000: state=Stopped err=<nil>
	W1028 04:37:38.902025    4454 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:37:38.910235    4454 out.go:177] * Restarting existing qemu2 VM for "multinode-677000" ...
	I1028 04:37:38.914335    4454 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:37:38.914374    4454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2e:26:cf:5e:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:37:38.916666    4454 main.go:141] libmachine: STDOUT: 
	I1028 04:37:38.916692    4454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:37:38.916726    4454 fix.go:56] duration metric: took 14.831833ms for fixHost
	I1028 04:37:38.916731    4454 start.go:83] releasing machines lock for "multinode-677000", held for 14.855208ms
	W1028 04:37:38.916740    4454 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:37:38.916782    4454 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:37:38.916787    4454 start.go:729] Will try again in 5 seconds ...
	I1028 04:37:43.918913    4454 start.go:360] acquireMachinesLock for multinode-677000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:37:43.919388    4454 start.go:364] duration metric: took 371.625µs to acquireMachinesLock for "multinode-677000"
	I1028 04:37:43.919487    4454 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:37:43.919507    4454 fix.go:54] fixHost starting: 
	I1028 04:37:43.920128    4454 fix.go:112] recreateIfNeeded on multinode-677000: state=Stopped err=<nil>
	W1028 04:37:43.920152    4454 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:37:43.928473    4454 out.go:177] * Restarting existing qemu2 VM for "multinode-677000" ...
	I1028 04:37:43.932443    4454 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:37:43.932638    4454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2e:26:cf:5e:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/multinode-677000/disk.qcow2
	I1028 04:37:43.942169    4454 main.go:141] libmachine: STDOUT: 
	I1028 04:37:43.942231    4454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:37:43.942292    4454 fix.go:56] duration metric: took 22.787792ms for fixHost
	I1028 04:37:43.942311    4454 start.go:83] releasing machines lock for "multinode-677000", held for 22.901542ms
	W1028 04:37:43.942484    4454 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:37:43.950421    4454 out.go:201] 
	W1028 04:37:43.954507    4454 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:37:43.954538    4454 out.go:270] * 
	* 
	W1028 04:37:43.957144    4454 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:37:43.965470    4454 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-677000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (73.858917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
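
Editor's note: every start and restart in this group dies at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket, so the connection is refused before the VM ever boots. A minimal pre-flight probe reproduces the same error; this is a sketch under the assumption that dialing the socket is a sufficient check, not part of minikube:

    // Probe the unix socket that socket_vmnet_client connects to. If the
    // socket file exists but no socket_vmnet daemon is serving it, Dial
    // fails with "connect: connection refused", matching the ERROR lines
    // in the log above (a missing file fails with "no such file or
    // directory" instead).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }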

TestMultiNode/serial/ValidateNameConflict (20.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-677000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-677000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-677000-m01 --driver=qemu2 : exit status 80 (9.823046292s)

-- stdout --
	* [multinode-677000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-677000-m01" primary control-plane node in "multinode-677000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-677000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-677000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-677000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-677000-m02 --driver=qemu2 : exit status 80 (9.947967959s)

-- stdout --
	* [multinode-677000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-677000-m02" primary control-plane node in "multinode-677000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-677000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-677000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-677000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-677000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-677000: exit status 83 (89.364208ms)

-- stdout --
	* The control-plane node multinode-677000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-677000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-677000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-677000 -n multinode-677000: exit status 7 (35.264584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-677000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.02s)
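
Editor's note: ValidateNameConflict never gets to exercise the naming rule it targets, because neither helper VM can start. The idea under test is that profile names ending in an "-mNN" suffix (like multinode-677000-m01 above) collide with minikube's generated per-node names. A hypothetical sketch of such a suffix check; the function name and regex are assumptions for illustration, not minikube's code:

    // conflictsWithNodeName reports whether a profile name looks like a
    // generated node name ("<cluster>-mNN") and would therefore collide
    // with multinode naming.
    package main

    import (
        "fmt"
        "regexp"
    )

    var nodeSuffix = regexp.MustCompile(`-m\d+$`)

    func conflictsWithNodeName(profile string) bool {
        return nodeSuffix.MatchString(profile)
    }

    func main() {
        fmt.Println(conflictsWithNodeName("multinode-677000-m01")) // true
        fmt.Println(conflictsWithNodeName("multinode-677000"))     // false
    }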

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-277000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-277000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.938713542s)

-- stdout --
	* [test-preload-277000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-277000" primary control-plane node in "test-preload-277000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-277000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:38:04.222975    4506 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:38:04.223109    4506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:04.223112    4506 out.go:358] Setting ErrFile to fd 2...
	I1028 04:38:04.223115    4506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:38:04.223244    4506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:38:04.224357    4506 out.go:352] Setting JSON to false
	I1028 04:38:04.242247    4506 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4055,"bootTime":1730111429,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:38:04.242321    4506 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:38:04.247317    4506 out.go:177] * [test-preload-277000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:38:04.255351    4506 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:38:04.255427    4506 notify.go:220] Checking for updates...
	I1028 04:38:04.262309    4506 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:38:04.265328    4506 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:38:04.268267    4506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:38:04.271322    4506 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:38:04.274306    4506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:38:04.277629    4506 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:38:04.277677    4506 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:38:04.282238    4506 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:38:04.289257    4506 start.go:297] selected driver: qemu2
	I1028 04:38:04.289263    4506 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:38:04.289269    4506 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:38:04.291799    4506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:38:04.295303    4506 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:38:04.298552    4506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:38:04.298582    4506 cni.go:84] Creating CNI manager for ""
	I1028 04:38:04.298606    4506 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:38:04.298611    4506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:38:04.298641    4506 start.go:340] cluster config:
	{Name:test-preload-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-277000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:38:04.303247    4506 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.311355    4506 out.go:177] * Starting "test-preload-277000" primary control-plane node in "test-preload-277000" cluster
	I1028 04:38:04.314294    4506 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1028 04:38:04.314386    4506 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/test-preload-277000/config.json ...
	I1028 04:38:04.314404    4506 cache.go:107] acquiring lock: {Name:mk8f7fedd57339f55502801ee62a33ecabbf16cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314409    4506 cache.go:107] acquiring lock: {Name:mkbc988c40fe2cf3b6d3034b13ced1eddb5f3213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314410    4506 cache.go:107] acquiring lock: {Name:mkbab9aea692ebe07cf187c36649a9749caac4d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314424    4506 cache.go:107] acquiring lock: {Name:mk85e8f1827c32b97bb12a4d007e23658014a3b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314443    4506 cache.go:107] acquiring lock: {Name:mk2ee69b91cd5ff48b50ba475bc9fbf5e66d36c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314412    4506 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/test-preload-277000/config.json: {Name:mk26747ab52a17611f1bd2f34e74059b127cf692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:38:04.314646    4506 cache.go:107] acquiring lock: {Name:mk53161d8b8c16d7e407b55ac502cf3716f94443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314725    4506 cache.go:107] acquiring lock: {Name:mk1ae175166c486e2d615107c90521d86e1b27c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.314995    4506 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 04:38:04.314727    4506 cache.go:107] acquiring lock: {Name:mk6c2bce02ea763f8f05f30cf740ede5c118e83f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:38:04.315167    4506 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:38:04.315214    4506 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 04:38:04.315327    4506 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 04:38:04.315352    4506 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:38:04.315385    4506 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 04:38:04.315359    4506 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 04:38:04.315515    4506 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:38:04.315512    4506 start.go:360] acquireMachinesLock for test-preload-277000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:04.315570    4506 start.go:364] duration metric: took 47.833µs to acquireMachinesLock for "test-preload-277000"
	I1028 04:38:04.315585    4506 start.go:93] Provisioning new machine with config: &{Name:test-preload-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-277000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:04.315639    4506 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:04.324252    4506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:38:04.327663    4506 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 04:38:04.327700    4506 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:38:04.327791    4506 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:38:04.327804    4506 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:38:04.328110    4506 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 04:38:04.328304    4506 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 04:38:04.328343    4506 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 04:38:04.328829    4506 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 04:38:04.342518    4506 start.go:159] libmachine.API.Create for "test-preload-277000" (driver="qemu2")
	I1028 04:38:04.342541    4506 client.go:168] LocalClient.Create starting
	I1028 04:38:04.342633    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:04.342671    4506 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:04.342681    4506 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:04.342723    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:04.342754    4506 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:04.342765    4506 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:04.343135    4506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:04.509715    4506 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:04.621580    4506 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:04.621614    4506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:04.621822    4506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:04.631814    4506 main.go:141] libmachine: STDOUT: 
	I1028 04:38:04.631835    4506 main.go:141] libmachine: STDERR: 
	I1028 04:38:04.631892    4506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2 +20000M
	I1028 04:38:04.641684    4506 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:04.641703    4506 main.go:141] libmachine: STDERR: 
	I1028 04:38:04.641718    4506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:04.641722    4506 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:04.641733    4506 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:04.641759    4506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:46:65:79:31:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:04.644065    4506 main.go:141] libmachine: STDOUT: 
	I1028 04:38:04.644112    4506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:04.644145    4506 client.go:171] duration metric: took 301.5965ms to LocalClient.Create
	I1028 04:38:04.825422    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1028 04:38:04.869830    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1028 04:38:04.929275    4506 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 04:38:04.929316    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 04:38:05.065195    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1028 04:38:05.098027    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1028 04:38:05.181123    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1028 04:38:05.226295    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 04:38:05.361143    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1028 04:38:05.361194    4506 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.046599584s
	I1028 04:38:05.361234    4506 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1028 04:38:05.558085    4506 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 04:38:05.558174    4506 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 04:38:06.005390    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 04:38:06.005459    4506 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.691055s
	I1028 04:38:06.005485    4506 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 04:38:06.644368    4506 start.go:128] duration metric: took 2.328683583s to createHost
	I1028 04:38:06.644431    4506 start.go:83] releasing machines lock for "test-preload-277000", held for 2.328836875s
	W1028 04:38:06.644498    4506 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:06.661646    4506 out.go:177] * Deleting "test-preload-277000" in qemu2 ...
	W1028 04:38:06.689210    4506 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:06.689235    4506 start.go:729] Will try again in 5 seconds ...
	I1028 04:38:07.427546    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1028 04:38:07.427613    4506 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.113001s
	I1028 04:38:07.427649    4506 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1028 04:38:07.814385    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1028 04:38:07.814439    4506 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.499689s
	I1028 04:38:07.814471    4506 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1028 04:38:09.640271    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1028 04:38:09.640320    4506 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.325889834s
	I1028 04:38:09.640345    4506 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1028 04:38:09.844552    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1028 04:38:09.844602    4506 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.530142167s
	I1028 04:38:09.844628    4506 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1028 04:38:10.029673    4506 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1028 04:38:10.029717    4506 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.715285666s
	I1028 04:38:10.029739    4506 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1028 04:38:11.689516    4506 start.go:360] acquireMachinesLock for test-preload-277000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:38:11.690061    4506 start.go:364] duration metric: took 465.5µs to acquireMachinesLock for "test-preload-277000"
	I1028 04:38:11.690194    4506 start.go:93] Provisioning new machine with config: &{Name:test-preload-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-277000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:38:11.690463    4506 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:38:11.697059    4506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:38:11.746862    4506 start.go:159] libmachine.API.Create for "test-preload-277000" (driver="qemu2")
	I1028 04:38:11.746926    4506 client.go:168] LocalClient.Create starting
	I1028 04:38:11.747133    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:38:11.747233    4506 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:11.747253    4506 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:11.747329    4506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:38:11.747387    4506 main.go:141] libmachine: Decoding PEM data...
	I1028 04:38:11.747400    4506 main.go:141] libmachine: Parsing certificate...
	I1028 04:38:11.748002    4506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:38:11.913652    4506 main.go:141] libmachine: Creating SSH key...
	I1028 04:38:12.063618    4506 main.go:141] libmachine: Creating Disk image...
	I1028 04:38:12.063630    4506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:38:12.063830    4506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:12.074070    4506 main.go:141] libmachine: STDOUT: 
	I1028 04:38:12.074088    4506 main.go:141] libmachine: STDERR: 
	I1028 04:38:12.074176    4506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2 +20000M
	I1028 04:38:12.082928    4506 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:38:12.082946    4506 main.go:141] libmachine: STDERR: 
	I1028 04:38:12.082961    4506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:12.082966    4506 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:38:12.082974    4506 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:38:12.083011    4506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:12:a7:23:52:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/test-preload-277000/disk.qcow2
	I1028 04:38:12.084898    4506 main.go:141] libmachine: STDOUT: 
	I1028 04:38:12.084915    4506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:38:12.084928    4506 client.go:171] duration metric: took 337.972833ms to LocalClient.Create
	I1028 04:38:14.086438    4506 start.go:128] duration metric: took 2.395937s to createHost
	I1028 04:38:14.086494    4506 start.go:83] releasing machines lock for "test-preload-277000", held for 2.396397125s
	W1028 04:38:14.086762    4506 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:38:14.100260    4506 out.go:201] 
	W1028 04:38:14.104169    4506 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:38:14.104193    4506 out.go:270] * 
	* 
	W1028 04:38:14.106696    4506 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:38:14.115181    4506 out.go:201] 

** /stderr **
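
The VM bring-up captured above boils down to two qemu-img calls plus a qemu-system-aarch64 launch wrapped by socket_vmnet_client, which opens the vmnet socket and hands it to QEMU as file descriptor 3. A condensed sketch of the same sequence (paths shortened and the flag list abbreviated relative to the full command in the log):

    # create the qcow2 disk from the raw seed image, then grow it by 20000 MB
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M

    # the wrapper connects to /var/run/socket_vmnet and passes the
    # connection to QEMU as fd 3 for the -netdev socket backend
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
      -m 2200 -smp 2 -boot d -cdrom boot2docker.iso \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -daemonize disk.qcow2

The "Connection refused" above comes from the wrapper's very first step, before QEMU itself ever runs.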
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-277000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-28 04:38:14.132201 -0700 PDT m=+3502.914751501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-277000 -n test-preload-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-277000 -n test-preload-277000: exit status 7 (74.184584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-277000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-277000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-277000
--- FAIL: TestPreload (10.10s)
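
Every qemu2 failure in this report shares one root cause: nothing is listening on /var/run/socket_vmnet when socket_vmnet_client connects. A minimal diagnosis-and-restart sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as in the launch command above (the gateway address is an illustrative value, not taken from this run):

    # does the socket exist, and is any process holding it open?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet

    # start the daemon in the foreground to watch for errors
    sudo /opt/socket_vmnet/bin/socket_vmnet \
      --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once a daemon holds the socket, rerunning the failed start (or the whole suite) should get past host creation.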

TestScheduledStopUnix (10.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-551000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-551000 --memory=2048 --driver=qemu2 : exit status 80 (9.892755084s)

-- stdout --
	* [scheduled-stop-551000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-551000" primary control-plane node in "scheduled-stop-551000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-551000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-551000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
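
The stderr above shows minikube's built-in single retry: StartHost fails, the half-created profile is deleted, and one more attempt is made before exiting with GUEST_PROVISION (exit status 80). The equivalent flow by hand looks roughly like this sketch (profile name and flags taken from this test):

    minikube start -p scheduled-stop-551000 --memory=2048 --driver=qemu2 || {
      minikube delete -p scheduled-stop-551000
      minikube start -p scheduled-stop-551000 --memory=2048 --driver=qemu2
    }

With the socket_vmnet daemon down, both attempts fail identically, which is why the two "Creating qemu2 VM" blocks in the stdout are byte-for-byte repeats.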
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-551000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-551000" primary control-plane node in "scheduled-stop-551000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-551000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-551000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-28 04:38:24.18112 -0700 PDT m=+3512.963615626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-551000 -n scheduled-stop-551000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-551000 -n scheduled-stop-551000: exit status 7 (74.357083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-551000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-551000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-551000
--- FAIL: TestScheduledStopUnix (10.05s)
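
The post-mortem helper runs the same two commands for every failed profile: query the host state, then delete the profile. Exit status 7 from "minikube status" corresponds here to a stopped or nonexistent host, which is why the harness notes "may be ok" and skips log retrieval. The pattern as a standalone sketch:

    out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-551000 -n scheduled-stop-551000
    echo "status exit: $?"   # 7 observed in this run; host reported "Stopped"
    out/minikube-darwin-arm64 delete -p scheduled-stop-551000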

TestSkaffold (12.4s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe231838591 version
E1028 04:38:26.011801    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe231838591 version: (1.017519459s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-587000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-587000 --memory=2600 --driver=qemu2 : exit status 80 (9.850279167s)

-- stdout --
	* [skaffold-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-587000" primary control-plane node in "skaffold-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-587000" primary control-plane node in "skaffold-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-28 04:38:36.589285 -0700 PDT m=+3525.371713376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-587000 -n skaffold-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-587000 -n skaffold-587000: exit status 7 (69.165709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-587000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-587000
--- FAIL: TestSkaffold (12.40s)
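
TestSkaffold only gets as far as verifying the downloaded skaffold binary (v2.13.2) before minikube start fails, so no skaffold workflow is exercised in this run. For reference, the usual way to aim skaffold's image builds at a working minikube profile is docker-env; this sketch shows the normal workflow, not what the test executed:

    minikube start -p skaffold-587000 --memory=2600 --driver=qemu2
    eval "$(minikube -p skaffold-587000 docker-env)"   # point the docker CLI at the VM's daemon
    skaffold run                                       # build in the VM, deploy to the cluster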

TestRunningBinaryUpgrade (593.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3945704751 start -p running-upgrade-687000 --memory=2200 --vm-driver=qemu2 
E1028 04:40:06.291946    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3945704751 start -p running-upgrade-687000 --memory=2200 --vm-driver=qemu2 : (54.657480667s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-687000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-687000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.620164542s)

-- stdout --
	* [running-upgrade-687000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-687000" primary control-plane node in "running-upgrade-687000" cluster
	* Updating the running qemu2 "running-upgrade-687000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1028 04:40:14.545359    4886 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:40:14.545703    4886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:40:14.545710    4886 out.go:358] Setting ErrFile to fd 2...
	I1028 04:40:14.545712    4886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:40:14.545851    4886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:40:14.546989    4886 out.go:352] Setting JSON to false
	I1028 04:40:14.566215    4886 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4185,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:40:14.566294    4886 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:40:14.571539    4886 out.go:177] * [running-upgrade-687000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:40:14.579537    4886 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:40:14.579580    4886 notify.go:220] Checking for updates...
	I1028 04:40:14.587522    4886 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:40:14.591549    4886 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:40:14.594555    4886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:40:14.597541    4886 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:40:14.600571    4886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:40:14.603768    4886 config.go:182] Loaded profile config "running-upgrade-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:40:14.606503    4886 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 04:40:14.609552    4886 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:40:14.612523    4886 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:40:14.619559    4886 start.go:297] selected driver: qemu2
	I1028 04:40:14.619565    4886 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57028 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:40:14.619620    4886 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:40:14.622215    4886 cni.go:84] Creating CNI manager for ""
	I1028 04:40:14.622244    4886 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:40:14.622269    4886 start.go:340] cluster config:
	{Name:running-upgrade-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57028 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:40:14.622316    4886 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:40:14.629507    4886 out.go:177] * Starting "running-upgrade-687000" primary control-plane node in "running-upgrade-687000" cluster
	I1028 04:40:14.633525    4886 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:40:14.633537    4886 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 04:40:14.633544    4886 cache.go:56] Caching tarball of preloaded images
	I1028 04:40:14.633609    4886 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:40:14.633614    4886 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 04:40:14.633661    4886 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/config.json ...
	I1028 04:40:14.634095    4886 start.go:360] acquireMachinesLock for running-upgrade-687000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:40:14.634142    4886 start.go:364] duration metric: took 41.042µs to acquireMachinesLock for "running-upgrade-687000"
	I1028 04:40:14.634152    4886 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:40:14.634156    4886 fix.go:54] fixHost starting: 
	I1028 04:40:14.634752    4886 fix.go:112] recreateIfNeeded on running-upgrade-687000: state=Running err=<nil>
	W1028 04:40:14.634763    4886 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:40:14.639491    4886 out.go:177] * Updating the running qemu2 "running-upgrade-687000" VM ...
	I1028 04:40:14.647525    4886 machine.go:93] provisionDockerMachine start ...
	I1028 04:40:14.647583    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:14.647712    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:14.647717    4886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 04:40:14.714436    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-687000
	
	I1028 04:40:14.714451    4886 buildroot.go:166] provisioning hostname "running-upgrade-687000"
	I1028 04:40:14.714502    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:14.714622    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:14.714628    4886 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-687000 && echo "running-upgrade-687000" | sudo tee /etc/hostname
	I1028 04:40:14.782350    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-687000
	
	I1028 04:40:14.782417    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:14.782518    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:14.782527    4886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-687000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-687000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-687000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 04:40:14.846301    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 04:40:14.846311    4886 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19876-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19876-1087/.minikube}
	I1028 04:40:14.846323    4886 buildroot.go:174] setting up certificates
	I1028 04:40:14.846328    4886 provision.go:84] configureAuth start
	I1028 04:40:14.846335    4886 provision.go:143] copyHostCerts
	I1028 04:40:14.846403    4886 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem, removing ...
	I1028 04:40:14.846413    4886 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem
	I1028 04:40:14.846537    4886 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem (1078 bytes)
	I1028 04:40:14.846727    4886 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem, removing ...
	I1028 04:40:14.846731    4886 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem
	I1028 04:40:14.846788    4886 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem (1123 bytes)
	I1028 04:40:14.846917    4886 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem, removing ...
	I1028 04:40:14.846921    4886 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem
	I1028 04:40:14.846980    4886 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem (1679 bytes)
	I1028 04:40:14.847071    4886 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-687000 san=[127.0.0.1 localhost minikube running-upgrade-687000]
	I1028 04:40:14.926331    4886 provision.go:177] copyRemoteCerts
	I1028 04:40:14.926399    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 04:40:14.926409    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:40:14.962814    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 04:40:14.969704    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 04:40:14.976835    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 04:40:14.983824    4886 provision.go:87] duration metric: took 137.486041ms to configureAuth
	I1028 04:40:14.983832    4886 buildroot.go:189] setting minikube options for container-runtime
	I1028 04:40:14.983940    4886 config.go:182] Loaded profile config "running-upgrade-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:40:14.983989    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:14.984080    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:14.984084    4886 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 04:40:15.052133    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 04:40:15.052141    4886 buildroot.go:70] root file system type: tmpfs
	I1028 04:40:15.052189    4886 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 04:40:15.052249    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:15.052357    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:15.052390    4886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 04:40:15.124780    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 04:40:15.124844    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:15.124957    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:15.124967    4886 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 04:40:15.193135    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 04:40:15.193146    4886 machine.go:96] duration metric: took 545.611458ms to provisionDockerMachine
	I1028 04:40:15.193152    4886 start.go:293] postStartSetup for "running-upgrade-687000" (driver="qemu2")
	I1028 04:40:15.193158    4886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 04:40:15.193215    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 04:40:15.193224    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:40:15.228307    4886 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 04:40:15.229517    4886 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 04:40:15.229523    4886 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/addons for local assets ...
	I1028 04:40:15.229601    4886 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/files for local assets ...
	I1028 04:40:15.229735    4886 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I1028 04:40:15.229899    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 04:40:15.232512    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:40:15.239174    4886 start.go:296] duration metric: took 46.017458ms for postStartSetup
	I1028 04:40:15.239192    4886 fix.go:56] duration metric: took 605.031416ms for fixHost
	I1028 04:40:15.239235    4886 main.go:141] libmachine: Using SSH client type: native
	I1028 04:40:15.239336    4886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f025f0] 0x102f04e30 <nil>  [] 0s} localhost 56996 <nil> <nil>}
	I1028 04:40:15.239341    4886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 04:40:15.303660    4886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115614.830301889
	
	I1028 04:40:15.303669    4886 fix.go:216] guest clock: 1730115614.830301889
	I1028 04:40:15.303674    4886 fix.go:229] Guest: 2024-10-28 04:40:14.830301889 -0700 PDT Remote: 2024-10-28 04:40:15.239194 -0700 PDT m=+0.715933043 (delta=-408.892111ms)
	I1028 04:40:15.303690    4886 fix.go:200] guest clock delta is within tolerance: -408.892111ms
	I1028 04:40:15.303693    4886 start.go:83] releasing machines lock for "running-upgrade-687000", held for 669.540333ms
	I1028 04:40:15.303769    4886 ssh_runner.go:195] Run: cat /version.json
	I1028 04:40:15.303781    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:40:15.303769    4886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 04:40:15.303805    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	W1028 04:40:15.304320    4886 sshutil.go:64] dial failure (will retry): dial tcp [::1]:56996: connect: connection refused
	I1028 04:40:15.304339    4886 retry.go:31] will retry after 312.427118ms: dial tcp [::1]:56996: connect: connection refused
	W1028 04:40:15.662090    4886 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 04:40:15.662193    4886 ssh_runner.go:195] Run: systemctl --version
	I1028 04:40:15.664794    4886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 04:40:15.666892    4886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 04:40:15.666937    4886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 04:40:15.670787    4886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 04:40:15.680702    4886 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 04:40:15.680717    4886 start.go:495] detecting cgroup driver to use...
	I1028 04:40:15.680789    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:40:15.686316    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 04:40:15.689533    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 04:40:15.692432    4886 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 04:40:15.692466    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 04:40:15.695277    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:40:15.698503    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 04:40:15.701574    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:40:15.704812    4886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 04:40:15.707699    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 04:40:15.710640    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 04:40:15.714191    4886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 04:40:15.717503    4886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 04:40:15.720140    4886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 04:40:15.722928    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:15.802990    4886 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 04:40:15.814104    4886 start.go:495] detecting cgroup driver to use...
	I1028 04:40:15.814175    4886 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 04:40:15.819235    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:40:15.825487    4886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 04:40:15.831277    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:40:15.836026    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 04:40:15.840396    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:40:15.845826    4886 ssh_runner.go:195] Run: which cri-dockerd
	I1028 04:40:15.846976    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 04:40:15.849710    4886 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 04:40:15.854372    4886 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 04:40:15.940066    4886 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 04:40:16.023857    4886 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 04:40:16.023913    4886 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 04:40:16.029260    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:16.103217    4886 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:40:19.540333    4886 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.437078791s)
	I1028 04:40:19.540420    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 04:40:19.545301    4886 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 04:40:19.551906    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:40:19.556789    4886 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 04:40:19.640575    4886 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 04:40:19.706248    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:19.772131    4886 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 04:40:19.777765    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:40:19.782480    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:19.853412    4886 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 04:40:19.892347    4886 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 04:40:19.892446    4886 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 04:40:19.894685    4886 start.go:563] Will wait 60s for crictl version
	I1028 04:40:19.894762    4886 ssh_runner.go:195] Run: which crictl
	I1028 04:40:19.896087    4886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 04:40:19.908291    4886 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 04:40:19.908371    4886 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:40:19.920712    4886 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:40:19.936376    4886 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 04:40:19.936546    4886 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 04:40:19.937929    4886 kubeadm.go:883] updating cluster {Name:running-upgrade-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57028 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 04:40:19.937971    4886 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:40:19.938016    4886 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:40:19.948332    4886 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:40:19.948340    4886 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:40:19.948398    4886 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:40:19.951399    4886 ssh_runner.go:195] Run: which lz4
	I1028 04:40:19.952753    4886 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 04:40:19.954053    4886 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 04:40:19.954068    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 04:40:20.896115    4886 docker.go:653] duration metric: took 943.400708ms to copy over tarball
	I1028 04:40:20.896186    4886 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 04:40:22.107154    4886 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.210949459s)
	I1028 04:40:22.107171    4886 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 04:40:22.123160    4886 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:40:22.126368    4886 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 04:40:22.131647    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:22.194817    4886 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:40:23.378118    4886 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.183278333s)
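
The "Completed: ...: (1.183278333s)" lines come from a wrapper that times each remote command and logs a duration metric when it runs long. A rough sketch of that shape (the exact threshold and wording are minikube internals; this only shows the general pattern):

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

// runTimed executes a command and logs how long it took, in the style of
// the ssh_runner.go:235 "Completed" lines above.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	log.Printf("Completed: %s %s: (%s)", name, strings.Join(args, " "), time.Since(start))
	return err
}

func main() {
	_ = runTimed("sleep", "1") // placeholder command for illustration
}
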
	I1028 04:40:23.378213    4886 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:40:23.397610    4886 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:40:23.397621    4886 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:40:23.397626    4886 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 04:40:23.402302    4886 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:40:23.404139    4886 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:40:23.405823    4886 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:40:23.405843    4886 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:40:23.408112    4886 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:40:23.408220    4886 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:40:23.409551    4886 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:40:23.409658    4886 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:40:23.410412    4886 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:40:23.410994    4886 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:40:23.412152    4886 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 04:40:23.412300    4886 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:40:23.413291    4886 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:40:23.413632    4886 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:40:23.414218    4886 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 04:40:23.415162    4886 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:40:23.889259    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:40:23.901069    4886 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 04:40:23.901105    4886 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:40:23.901167    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:40:23.911914    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 04:40:23.937615    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:40:23.948312    4886 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 04:40:23.948348    4886 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:40:23.948407    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:40:23.949341    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:40:23.965436    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 04:40:23.968016    4886 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 04:40:23.968038    4886 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:40:23.968101    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:40:23.978534    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 04:40:24.004812    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:40:24.015643    4886 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 04:40:24.015664    4886 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:40:24.015728    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:40:24.025582    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1028 04:40:24.083801    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 04:40:24.095361    4886 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 04:40:24.095378    4886 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:40:24.095436    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 04:40:24.103245    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 04:40:24.105733    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 04:40:24.113661    4886 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 04:40:24.113685    4886 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 04:40:24.113748    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1028 04:40:24.123988    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 04:40:24.124131    4886 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 04:40:24.125737    4886 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 04:40:24.125751    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 04:40:24.133834    4886 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 04:40:24.133841    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 04:40:24.163709    4886 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
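
Each "needs transfer" decision above follows the same recipe: inspect the image ID in the container runtime, and when it does not match the expected hash, remove the stale tag and pipe the cached tarball into docker load. A condensed sketch of that flow, reusing the pause:3.7 hash and path reported in the log (the runtime reports IDs as sha256:<hex>, so the prefix is stripped before comparing):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadIfMismatched mimics the cache_images.go decision in the log: if the
// locally present image ID differs from the expected hash, remove the image
// and reload it from the cached tarball via `sudo cat ... | docker load`.
func loadIfMismatched(image, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:") == wantID {
		return nil // image already present at the expected hash
	}
	_ = exec.Command("docker", "rmi", image).Run() // ignore "no such image"
	cmd := fmt.Sprintf("sudo cat %s | docker load", tarball)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	_ = loadIfMismatched("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
}
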
	W1028 04:40:24.210385    4886 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 04:40:24.210547    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:40:24.221450    4886 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 04:40:24.221481    4886 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:40:24.221557    4886 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:40:24.231673    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 04:40:24.231818    4886 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:40:24.233383    4886 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 04:40:24.233397    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1028 04:40:24.259198    4886 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 04:40:24.259307    4886 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:40:24.277098    4886 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:40:24.277113    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 04:40:24.283021    4886 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 04:40:24.283045    4886 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:40:24.283107    4886 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:40:24.322824    4886 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 04:40:25.289754    4886 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.006610334s)
	I1028 04:40:25.289804    4886 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 04:40:25.290221    4886 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:40:25.294838    4886 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 04:40:25.294895    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 04:40:25.353414    4886 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:40:25.353428    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 04:40:25.582896    4886 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 04:40:25.582937    4886 cache_images.go:92] duration metric: took 2.185283042s to LoadCachedImages
	W1028 04:40:25.582983    4886 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1028 04:40:25.582989    4886 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 04:40:25.583042    4886 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-687000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 04:40:25.583110    4886 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 04:40:25.596371    4886 cni.go:84] Creating CNI manager for ""
	I1028 04:40:25.596383    4886 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:40:25.596395    4886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 04:40:25.596407    4886 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-687000 NodeName:running-upgrade-687000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 04:40:25.596482    4886 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-687000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 04:40:25.596551    4886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 04:40:25.599502    4886 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 04:40:25.599551    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 04:40:25.602675    4886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 04:40:25.608016    4886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 04:40:25.612788    4886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1028 04:40:25.618596    4886 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 04:40:25.620153    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:40:25.684053    4886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:40:25.689198    4886 certs.go:68] Setting up /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000 for IP: 10.0.2.15
	I1028 04:40:25.689206    4886 certs.go:194] generating shared ca certs ...
	I1028 04:40:25.689214    4886 certs.go:226] acquiring lock for ca certs: {Name:mk8f0a455373409f6ac5dde02ca67c613058d85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:40:25.689397    4886 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key
	I1028 04:40:25.689456    4886 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key
	I1028 04:40:25.689462    4886 certs.go:256] generating profile certs ...
	I1028 04:40:25.689558    4886 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.key
	I1028 04:40:25.689576    4886 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key.f5b243a0
	I1028 04:40:25.689585    4886 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt.f5b243a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1028 04:40:25.784220    4886 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt.f5b243a0 ...
	I1028 04:40:25.784229    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt.f5b243a0: {Name:mk969568692eb321acd851df64353c53722ee02b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:40:25.784538    4886 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key.f5b243a0 ...
	I1028 04:40:25.784543    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key.f5b243a0: {Name:mkfacab428a073754b9929f28ed009e7f69ce017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:40:25.784762    4886 certs.go:381] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt.f5b243a0 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt
	I1028 04:40:25.784911    4886 certs.go:385] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key.f5b243a0 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key
	I1028 04:40:25.785085    4886 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/proxy-client.key
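
crypto.go:68 above generates the apiserver serving certificate with four IP SANs. A self-contained Go sketch of such a certificate using crypto/x509; it self-signs for brevity, whereas the real certificate is signed by the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// The SANs match the IP list crypto.go:68 reports for the apiserver cert.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; the real cert is signed by the minikubeCA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
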
	I1028 04:40:25.785227    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem (1338 bytes)
	W1028 04:40:25.785268    4886 certs.go:480] ignoring /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I1028 04:40:25.785275    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 04:40:25.785309    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem (1078 bytes)
	I1028 04:40:25.785341    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem (1123 bytes)
	I1028 04:40:25.785371    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem (1679 bytes)
	I1028 04:40:25.785435    4886 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:40:25.785810    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 04:40:25.793445    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 04:40:25.800946    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 04:40:25.808937    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 04:40:25.815733    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 04:40:25.822541    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 04:40:25.829666    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 04:40:25.837457    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 04:40:25.844770    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 04:40:25.852259    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I1028 04:40:25.858986    4886 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I1028 04:40:25.866259    4886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 04:40:25.871499    4886 ssh_runner.go:195] Run: openssl version
	I1028 04:40:25.873425    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I1028 04:40:25.876354    4886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I1028 04:40:25.877850    4886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:47 /usr/share/ca-certificates/1598.pem
	I1028 04:40:25.877872    4886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I1028 04:40:25.879926    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
	I1028 04:40:25.882795    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I1028 04:40:25.886330    4886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I1028 04:40:25.887966    4886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:47 /usr/share/ca-certificates/15982.pem
	I1028 04:40:25.887995    4886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I1028 04:40:25.889849    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 04:40:25.892906    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 04:40:25.895962    4886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:40:25.897503    4886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:40 /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:40:25.897532    4886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:40:25.899439    4886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
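
The ls/openssl/ln triples above install each CA into the OpenSSL hash directory: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file for OpenSSL-based verification to find it. A sketch of that pairing, shelling out to openssl for the hash just as the log does:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the `openssl x509 -hash -noout` + `ln -fs`
// pair above: OpenSSL resolves CAs in certsDir via <subject-hash>.0 symlinks.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate the -f (force) in ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
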
	I1028 04:40:25.902637    4886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 04:40:25.904259    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 04:40:25.906124    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 04:40:25.907978    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 04:40:25.909764    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 04:40:25.912181    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 04:40:25.913990    4886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
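
`openssl x509 -noout -checkend 86400` exits non-zero when a certificate expires within the next 24 hours, which is how these six probes decide whether a cert must be regenerated. The equivalent test in Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same predicate as `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
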
	I1028 04:40:25.915714    4886 kubeadm.go:392] StartCluster: {Name:running-upgrade-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57028 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:40:25.915793    4886 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:40:25.926157    4886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 04:40:25.929900    4886 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 04:40:25.929908    4886 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 04:40:25.929940    4886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 04:40:25.932968    4886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:40:25.933218    4886 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-687000" does not appear in /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:40:25.933270    4886 kubeconfig.go:62] /Users/jenkins/minikube-integration/19876-1087/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-687000" cluster setting kubeconfig missing "running-upgrade-687000" context setting]
	I1028 04:40:25.933410    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:40:25.934135    4886 kapi.go:59] client config for running-upgrade-687000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10495e680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
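
The rest.Config dump above is what minikube hands to client-go: mutual TLS straight against the node IP. A trimmed sketch of building an equivalent client, assuming the k8s.io/client-go module is available (paths abbreviated from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the client config in the log: client cert/key plus the cluster CA.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/running-upgrade-687000/client.crt",
			KeyFile:  ".minikube/profiles/running-upgrade-687000/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	fmt.Println(clientset != nil, err)
}
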
	I1028 04:40:25.934488    4886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 04:40:25.937468    4886 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-687000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
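
Drift detection here is just `diff -u` with exit-code semantics: status 0 means the rendered kubeadm config is unchanged, status 1 means it drifted and the cluster must be reconfigured, anything higher means diff itself failed. A sketch:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` the way the kubeadm.go:640 check does
// and interprets the exit status rather than parsing output.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // differences found: reconfigure
	}
	return false, "", err // exit status >1: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
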
	I1028 04:40:25.937473    4886 kubeadm.go:1160] stopping kube-system containers ...
	I1028 04:40:25.937519    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:40:25.948863    4886 docker.go:483] Stopping containers: [4dce65c33ca1 cc0b6ba396db eaf74f906b42 9e3fe090e3aa ca8fcda7966e fd3c7b0d4f64 bcfc81110588 9f9ab9b78d6b 75a5b2c97382 e35b57d77161 de37a79bc05f ce6612e0b11c 5534709ba7b8]
	I1028 04:40:25.948934    4886 ssh_runner.go:195] Run: docker stop 4dce65c33ca1 cc0b6ba396db eaf74f906b42 9e3fe090e3aa ca8fcda7966e fd3c7b0d4f64 bcfc81110588 9f9ab9b78d6b 75a5b2c97382 e35b57d77161 de37a79bc05f ce6612e0b11c 5534709ba7b8
	I1028 04:40:25.960090    4886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 04:40:26.047440    4886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:40:26.051417    4886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 28 11:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 28 11:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 28 11:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 28 11:40 /etc/kubernetes/scheduler.conf
	
	I1028 04:40:26.051461    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf
	I1028 04:40:26.054865    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:40:26.054901    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:40:26.058457    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf
	I1028 04:40:26.061586    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:40:26.061625    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:40:26.064902    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf
	I1028 04:40:26.067504    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:40:26.067532    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:40:26.070405    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf
	I1028 04:40:26.073525    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:40:26.073559    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
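
The four grep/rm pairs above prune any kubeconfig that no longer references the expected control-plane endpoint, so the `kubeadm init phase kubeconfig` run that follows can regenerate them. In sketch form:

package main

import (
	"os/exec"
	"path/filepath"
)

// pruneStaleKubeconfigs mirrors the pattern in the log: if grep cannot find
// the endpoint in a kubeconfig, the file is stale and gets removed.
func pruneStaleKubeconfigs(endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:57028")
}
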
	I1028 04:40:26.076423    4886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:40:26.079097    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:40:26.100508    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:40:26.671878    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:40:26.861462    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:40:26.885139    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:40:26.908416    4886 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:40:26.908511    4886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:40:27.410704    4886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:40:27.910548    4886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:40:27.914822    4886 api_server.go:72] duration metric: took 1.006401542s to wait for apiserver process to appear ...
	I1028 04:40:27.914833    4886 api_server.go:88] waiting for apiserver healthz status ...
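
Waiting for the apiserver process is a pgrep poll on a roughly 500ms cadence, as the 04:40:26.908 / 27.410 / 27.910 timestamps show. A sketch of the loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the kube-apiserver process appears or the
// deadline passes, matching the retry spacing visible in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}
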
	I1028 04:40:27.914855    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:32.916999    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:32.917037    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:37.917489    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:37.917552    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:42.918338    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:42.918439    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:47.919492    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:47.919598    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:52.921281    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:52.921378    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:40:57.923556    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:40:57.923662    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:02.925255    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:02.925343    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:07.926915    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:07.927060    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:12.928877    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:12.928982    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:17.931118    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:17.931213    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:22.933549    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:22.933641    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:27.936081    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
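
Each healthz attempt above gives up after about five seconds, consistent with a per-request client timeout. A sketch of the probe; skipping TLS verification is a sketch-only shortcut, whereas the real client trusts minikubeCA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues the same probe as the api_server.go:253 lines above:
// a GET against /healthz that must come back 200 within the timeout.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between attempts in the log
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}
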
	I1028 04:41:27.936644    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:41:27.973293    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:41:27.973448    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:41:27.994353    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:41:27.994482    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:41:28.011020    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:41:28.011110    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:41:28.025432    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:41:28.025511    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:41:28.036314    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:41:28.036379    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:41:28.047065    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:41:28.047140    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:41:28.058634    4886 logs.go:282] 0 containers: []
	W1028 04:41:28.058645    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:41:28.058716    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:41:28.069696    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:41:28.069716    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:41:28.069721    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:41:28.082068    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:41:28.082079    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:41:28.086835    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:41:28.086844    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:41:28.105009    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:41:28.105022    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:41:28.119914    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:41:28.119926    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:41:28.135877    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:41:28.135890    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:41:28.160250    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:41:28.160260    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:41:28.253250    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:41:28.253266    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:41:28.267235    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:41:28.267248    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:41:28.279062    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:41:28.279079    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:41:28.317964    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:41:28.317976    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:41:28.333024    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:41:28.333034    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:41:28.344621    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:41:28.344633    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:41:28.356124    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:41:28.356137    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:41:28.381398    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:41:28.381410    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:41:28.398870    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:41:28.398883    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:41:28.411086    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:41:28.411101    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
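
When healthz keeps failing, the retry loop pauses to gather diagnostics: list container IDs per k8s_<component> name filter, then tail the last 400 lines of each. A sketch of one gathering pass:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs mirrors the logs.go:282/123 pairs above: resolve container IDs
// for a component by name filter, then pull recent logs from each.
func gatherLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		_ = gatherLogs(c)
	}
}
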
	I1028 04:41:30.924969    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:35.926641    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:35.927176    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:41:35.970689    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:41:35.970845    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:41:35.998463    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:41:35.998578    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:41:36.012982    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:41:36.013070    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:41:36.024786    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:41:36.024871    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:41:36.035402    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:41:36.035481    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:41:36.053461    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:41:36.053536    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:41:36.064091    4886 logs.go:282] 0 containers: []
	W1028 04:41:36.064104    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:41:36.064171    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:41:36.074596    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:41:36.074625    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:41:36.074635    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:41:36.089059    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:41:36.089073    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:41:36.103629    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:41:36.103641    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:41:36.115806    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:41:36.115821    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:41:36.127535    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:41:36.127545    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:41:36.151750    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:41:36.151758    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:41:36.189918    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:41:36.189932    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:41:36.201966    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:41:36.201975    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:41:36.217910    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:41:36.217922    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:41:36.235652    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:41:36.235661    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:41:36.247605    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:41:36.247619    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:41:36.275917    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:41:36.275925    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:41:36.290641    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:41:36.290655    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:41:36.302516    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:41:36.302525    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:41:36.307359    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:41:36.307366    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:41:36.318780    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:41:36.318789    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:41:36.333961    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:41:36.333970    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:41:38.873433    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:43.875822    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:43.876343    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:41:43.914878    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:41:43.915018    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:41:43.936358    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:41:43.936481    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:41:43.951877    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:41:43.951953    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:41:43.964758    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:41:43.964842    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:41:43.975523    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:41:43.975610    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:41:43.986686    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:41:43.986763    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:41:43.997566    4886 logs.go:282] 0 containers: []
	W1028 04:41:43.997579    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:41:43.997637    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:41:44.008400    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
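Each round begins by resolving container IDs per control-plane component: docker ps -a is filtered on the k8s_<component> name prefix that cri-dockerd applies and formatted down to bare IDs. Two IDs for a component (as with kube-apiserver above) typically means an exited instance plus its replacement are both present. A self-contained sketch of that discovery step (run locally here for simplicity; minikube executes the same docker command inside the VM over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers filters docker ps -a by the k8s_<component> name prefix
    // and returns only the container IDs, one per matching container.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // cf. the logs.go:282 lines
        }
    }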
	I1028 04:41:44.008431    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:41:44.008436    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:41:44.023034    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:41:44.023046    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:41:44.034310    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:41:44.034320    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:41:44.073312    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:41:44.073322    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:41:44.087393    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:41:44.087406    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:41:44.105368    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:41:44.105382    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:41:44.117187    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:41:44.117202    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:41:44.152278    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:41:44.152290    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:41:44.164581    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:41:44.164595    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:41:44.175495    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:41:44.175506    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:41:44.187109    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:41:44.187121    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:41:44.198994    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:41:44.199007    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:41:44.225783    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:41:44.225793    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:41:44.241793    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:41:44.241804    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:41:44.253393    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:41:44.253404    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:41:44.279790    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:41:44.279798    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:41:44.284298    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:41:44.284304    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
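That completes one full collection pass: per-container logs capped at the last 400 lines (docker logs --tail 400), the kubelet and Docker/cri-docker units capped the same way via journalctl -n 400, dmesg filtered to warn and above, kubectl describe nodes run against the kubeconfig inside the VM, and a container inventory. The inventory command is worth unpacking: in sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the backticks substitute the full path to crictl when which finds it and the literal name crictl otherwise, and if that invocation fails outright the outer || falls back to sudo docker ps -a.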
	I1028 04:41:46.799801    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:51.802567    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:51.803196    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:41:51.842712    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:41:51.842871    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:41:51.865001    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:41:51.865129    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:41:51.881416    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:41:51.881505    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:41:51.900249    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:41:51.900337    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:41:51.915620    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:41:51.915698    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:41:51.930613    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:41:51.930693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:41:51.941455    4886 logs.go:282] 0 containers: []
	W1028 04:41:51.941471    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:41:51.941540    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:41:51.952388    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:41:51.952407    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:41:51.952413    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:41:51.964654    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:41:51.964666    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:41:51.976891    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:41:51.976904    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:41:51.993435    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:41:51.993448    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:41:52.004993    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:41:52.005004    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:41:52.019369    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:41:52.019381    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:41:52.038052    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:41:52.038064    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:41:52.049508    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:41:52.049520    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:41:52.090951    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:41:52.090962    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:41:52.106665    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:41:52.106677    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:41:52.118370    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:41:52.118383    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:41:52.143083    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:41:52.143096    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:41:52.147332    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:41:52.147340    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:41:52.182290    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:41:52.182303    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:41:52.197585    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:41:52.197599    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:41:52.211056    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:41:52.211070    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:41:52.234820    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:41:52.234833    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:41:54.752768    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:41:59.755330    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:41:59.755902    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:41:59.796152    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:41:59.796306    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:41:59.818139    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:41:59.818279    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:41:59.833678    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:41:59.833762    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:41:59.846669    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:41:59.846754    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:41:59.857857    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:41:59.857935    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:41:59.868674    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:41:59.868753    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:41:59.879306    4886 logs.go:282] 0 containers: []
	W1028 04:41:59.879318    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:41:59.879382    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:41:59.889767    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:41:59.889789    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:41:59.889795    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:41:59.929030    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:41:59.929040    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:41:59.943306    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:41:59.943317    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:41:59.963912    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:41:59.963922    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:41:59.977025    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:41:59.977037    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:41:59.990064    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:41:59.990076    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:00.002561    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:00.002573    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:00.007260    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:00.007269    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:00.042572    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:00.042587    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:00.067718    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:00.067730    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:00.079418    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:00.079431    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:00.093520    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:00.093532    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:00.109143    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:00.109154    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:00.120703    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:00.120713    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:00.146583    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:00.146590    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:00.158587    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:00.158598    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:00.170465    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:00.170474    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:02.690071    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:07.691401    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:07.691878    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:07.730444    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:07.730605    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:07.756715    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:07.756829    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:07.770835    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:07.770920    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:07.782771    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:07.782853    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:07.793091    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:07.793160    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:07.803645    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:07.803730    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:07.813869    4886 logs.go:282] 0 containers: []
	W1028 04:42:07.813885    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:07.813944    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:07.824677    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:07.824696    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:07.824701    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:07.855577    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:07.855590    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:07.868196    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:07.868209    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:07.891968    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:07.891975    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:07.906476    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:07.906489    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:07.920646    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:07.920660    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:07.935297    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:07.935306    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:07.947149    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:07.947163    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:07.959093    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:07.959105    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:07.998130    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:07.998141    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:08.033600    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:08.033612    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:08.048788    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:08.048798    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:08.066015    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:08.066026    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:08.077823    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:08.077832    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:08.082706    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:08.082713    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:08.094642    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:08.094655    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:08.109811    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:08.109822    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:10.621755    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:15.624636    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:15.625201    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:15.662837    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:15.662996    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:15.685074    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:15.685200    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:15.699879    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:15.699958    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:15.712661    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:15.712750    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:15.724852    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:15.724928    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:15.736566    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:15.736649    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:15.747308    4886 logs.go:282] 0 containers: []
	W1028 04:42:15.747324    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:15.747392    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:15.758560    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:15.758575    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:15.758583    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:15.770955    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:15.770969    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:15.796649    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:15.796659    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:15.812212    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:15.812224    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:15.829849    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:15.829858    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:15.841821    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:15.841832    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:15.854005    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:15.854020    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:15.894962    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:15.894972    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:15.899484    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:15.899492    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:15.913536    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:15.913548    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:15.929612    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:15.929623    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:15.941670    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:15.941684    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:15.966649    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:15.966657    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:15.978580    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:15.978590    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:16.013522    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:16.013536    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:16.027850    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:16.027859    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:16.042216    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:16.042229    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:18.558267    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:23.560660    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:23.561236    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:23.599609    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:23.599764    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:23.621116    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:23.621234    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:23.639068    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:23.639153    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:23.651900    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:23.651974    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:23.663427    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:23.663504    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:23.674504    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:23.674584    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:23.685037    4886 logs.go:282] 0 containers: []
	W1028 04:42:23.685049    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:23.685110    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:23.695971    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:23.695989    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:23.695995    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:23.710843    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:23.710857    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:23.722113    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:23.722125    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:23.747544    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:23.747551    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:23.758713    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:23.758726    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:23.770653    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:23.770667    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:23.775207    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:23.775213    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:23.790243    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:23.790254    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:23.804869    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:23.804879    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:23.820598    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:23.820608    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:23.831688    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:23.831699    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:23.843454    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:23.843465    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:23.878867    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:23.878884    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:23.894069    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:23.894079    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:23.908705    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:23.908715    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:23.949845    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:23.949852    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:23.974862    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:23.974873    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:26.494492    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:31.496011    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:31.496539    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:31.529886    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:31.530025    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:31.550652    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:31.550778    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:31.565247    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:31.565335    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:31.577197    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:31.577270    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:31.587921    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:31.587998    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:31.598766    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:31.598845    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:31.609496    4886 logs.go:282] 0 containers: []
	W1028 04:42:31.609508    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:31.609564    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:31.620479    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:31.620502    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:31.620507    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:31.636270    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:31.636282    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:31.650010    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:31.650019    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:31.661489    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:31.661499    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:31.685743    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:31.685754    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:31.699517    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:31.699531    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:31.711031    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:31.711043    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:31.723154    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:31.723165    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:31.761914    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:31.761921    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:31.765912    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:31.765917    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:31.784180    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:31.784189    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:31.795360    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:31.795369    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:31.809307    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:31.809321    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:31.824019    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:31.824030    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:31.848858    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:31.848868    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:31.897815    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:31.897828    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:31.918434    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:31.918445    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:34.432544    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:39.435416    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:39.435661    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:39.460244    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:39.460386    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:39.475660    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:39.475760    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:39.492592    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:39.492665    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:39.503186    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:39.503257    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:39.513987    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:39.514056    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:39.525112    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:39.525180    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:39.540186    4886 logs.go:282] 0 containers: []
	W1028 04:42:39.540198    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:39.540254    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:39.550687    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:39.550706    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:39.550711    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:39.555444    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:39.555451    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:39.591719    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:39.591730    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:39.616309    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:39.616322    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:39.633300    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:39.633310    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:39.650951    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:39.650965    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:39.665793    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:39.665806    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:39.676827    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:39.676837    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:39.688081    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:39.688092    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:39.699718    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:39.699731    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:39.713443    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:39.713452    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:39.725580    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:39.725590    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:39.740105    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:39.740113    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:39.763558    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:39.763568    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:39.775126    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:39.775138    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:39.814480    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:39.814487    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:39.829250    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:39.829264    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:42.342727    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:47.345443    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:47.345586    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:47.357247    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:47.357332    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:47.368177    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:47.368266    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:47.379331    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:47.379407    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:47.390412    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:47.390483    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:47.401633    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:47.401704    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:47.412241    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:47.412308    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:47.423040    4886 logs.go:282] 0 containers: []
	W1028 04:42:47.423051    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:47.423109    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:47.434048    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:47.434066    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:47.434072    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:47.469565    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:47.469575    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:47.488071    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:47.488082    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:47.529872    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:47.529880    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:47.541995    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:47.542006    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:47.554323    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:47.554333    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:47.566269    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:47.566281    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:47.585216    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:47.585225    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:47.608560    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:47.608571    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:47.627145    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:47.627155    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:47.631509    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:47.631516    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:47.668293    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:47.668307    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:47.683571    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:47.683582    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:47.697434    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:47.697444    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:47.712046    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:47.712056    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:47.723804    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:47.723816    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:47.735720    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:47.735731    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:50.263849    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:42:55.266218    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:42:55.266831    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:42:55.306059    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:42:55.306219    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:42:55.327490    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:42:55.327605    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:42:55.342915    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:42:55.343008    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:42:55.363888    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:42:55.363970    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:42:55.374630    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:42:55.374704    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:42:55.385653    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:42:55.385723    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:42:55.396282    4886 logs.go:282] 0 containers: []
	W1028 04:42:55.396295    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:42:55.396365    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:42:55.407314    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:42:55.407331    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:42:55.407338    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:42:55.419837    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:42:55.419850    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:42:55.424183    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:42:55.424190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:42:55.438659    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:42:55.438669    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:42:55.450264    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:42:55.450276    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:42:55.461455    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:42:55.461469    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:42:55.475567    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:42:55.475580    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:42:55.493174    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:42:55.493184    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:42:55.533488    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:42:55.533498    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:42:55.568269    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:42:55.568280    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:42:55.593103    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:42:55.593115    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:42:55.607863    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:42:55.607872    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:42:55.622667    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:42:55.622677    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:42:55.636387    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:42:55.636396    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:42:55.657723    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:42:55.657732    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:42:55.669536    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:42:55.669546    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:42:55.694833    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:42:55.694839    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:42:58.208656    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:03.211364    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
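Note that by this round the probe has been failing for roughly ninety seconds (04:41:38 through 04:43:03), and every enumeration pass has returned exactly the same container IDs for every component. Nothing has restarted or recovered in the interim; the loop is repeatedly collecting identical diagnostics while waiting out an apiserver that never comes up.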
	I1028 04:43:03.211510    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:03.225568    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:03.225681    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:03.237152    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:03.237238    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:03.248060    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:03.248144    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:03.260348    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:03.260457    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:03.272943    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:03.273032    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:03.284046    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:03.284142    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:03.295085    4886 logs.go:282] 0 containers: []
	W1028 04:43:03.295098    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:03.295212    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:03.307550    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:03.307567    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:03.307573    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:03.321685    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:03.321695    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:03.333349    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:03.333361    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:03.350640    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:03.350651    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:03.385824    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:03.385835    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:03.413310    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:03.413327    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:03.426097    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:03.426109    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:03.439261    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:03.439274    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:03.454903    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:03.454917    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:03.471176    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:03.471190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:03.482428    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:03.482440    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:03.493990    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:03.494001    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:03.536516    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:03.536537    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:03.544889    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:03.544905    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:03.558335    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:03.558348    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:03.574602    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:03.574613    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:03.586866    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:03.586877    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
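
The cycle above — a healthz probe that is written off as "stopped" after ~5s, followed by a full container and log sweep — repeats roughly every eight seconds for the rest of the restart window. A minimal Go sketch of that probe-and-retry pattern, with the endpoint and timings taken from the log (illustrative only, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver the way the log above does: each
// probe gets ~5s before it is written off as "stopped", then the loop
// sleeps briefly and tries again until the overall deadline passes.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// the VM's apiserver certificate is self-signed, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered
			}
		}
		time.Sleep(2500 * time.Millisecond) // ~2.5s between cycles in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute))
	fmt.Println(err)
}
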
	I1028 04:43:06.113406    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:11.115765    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:11.115916    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:11.128269    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:11.128352    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:11.140109    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:11.140192    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:11.151713    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:11.151799    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:11.163442    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:11.163533    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:11.176360    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:11.176443    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:11.189063    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:11.189148    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:11.201600    4886 logs.go:282] 0 containers: []
	W1028 04:43:11.201632    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:11.201703    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:11.219063    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:11.219085    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:11.219091    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:11.263229    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:11.263250    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:11.305277    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:11.305288    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:11.321318    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:11.321330    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:11.341149    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:11.341164    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:11.354312    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:11.354326    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:11.358998    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:11.359010    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:11.374908    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:11.374920    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:11.388279    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:11.388294    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:11.403526    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:11.403544    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:11.419503    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:11.419519    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:11.431902    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:11.431915    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:11.448135    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:11.448147    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:11.463050    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:11.463062    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:11.490782    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:11.490799    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:11.504661    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:11.504674    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:11.518178    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:11.518191    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
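
Between probes, each control-plane component is located by container name. Docker-managed Kubernetes containers are named k8s_<component>_<pod>_..., so the "docker ps -a --filter=name=k8s_..." lines above match both the current and any exited instance of each component. A sketch of that lookup (the command line is verbatim from the log; the helper around it is invented here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same lookup as the log: filtering on the k8s_
// prefix returns the IDs of all matching containers, running or exited.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:282 above
	}
}

Note also the "container status" step: its command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when it is installed and falls back to plain docker ps otherwise.
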
	I1028 04:43:14.047874    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:19.050330    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:19.050606    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:19.081250    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:19.081366    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:19.096243    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:19.096335    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:19.108800    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:19.108878    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:19.119313    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:19.119396    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:19.129726    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:19.129798    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:19.140108    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:19.140177    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:19.150474    4886 logs.go:282] 0 containers: []
	W1028 04:43:19.150485    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:19.150546    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:19.161819    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:19.161838    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:19.161843    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:19.175990    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:19.176001    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:19.187336    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:19.187347    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:19.204646    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:19.204658    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:19.216159    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:19.216170    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:19.239917    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:19.239926    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:19.253654    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:19.253667    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:19.268716    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:19.268726    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:19.280070    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:19.280083    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:19.314310    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:19.314320    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:19.326731    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:19.326741    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:19.338227    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:19.338237    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:19.350154    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:19.350164    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:19.375199    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:19.375209    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:19.379756    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:19.379764    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:19.394224    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:19.394233    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:19.405409    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:19.405421    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
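
Each "Gathering logs for X" step announced by logs.go:123 above maps onto a single remote command: docker logs --tail 400 for containers, journalctl for the kubelet and Docker units, dmesg for kernel warnings, and kubectl describe nodes for the cluster view. A self-contained sketch of that fan-out (container ID, tail length, unit names and dmesg flags are all verbatim from the log; only the wrapper is invented here):

package main

import (
	"fmt"
	"os/exec"
)

// gather prints one labelled section, the way each gathering step above
// turns into a single remote command.
func gather(name string, argv ...string) {
	fmt.Printf("==> %s <==\n", name)
	out, _ := exec.Command(argv[0], argv[1:]...).CombinedOutput()
	fmt.Println(string(out))
}

func main() {
	gather("kube-apiserver [c558c2ff458f]", "docker", "logs", "--tail", "400", "c558c2ff458f")
	gather("kubelet", "/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "/bin/bash", "-c", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
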
	I1028 04:43:21.946624    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:26.949325    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:26.949481    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:26.967400    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:26.967477    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:26.981261    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:26.981342    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:26.994229    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:26.994330    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:27.005975    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:27.006055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:27.020128    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:27.020208    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:27.030885    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:27.030968    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:27.041171    4886 logs.go:282] 0 containers: []
	W1028 04:43:27.041182    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:27.041248    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:27.054050    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:27.054068    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:27.054075    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:27.071091    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:27.071102    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:27.085632    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:27.085645    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:27.101524    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:27.101535    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:27.116223    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:27.116235    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:27.128261    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:27.128275    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:27.139936    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:27.139948    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:27.175317    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:27.175329    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:27.192594    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:27.192604    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:27.206350    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:27.206362    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:27.231253    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:27.231261    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:27.257150    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:27.257163    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:27.269040    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:27.269050    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:27.284671    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:27.284680    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:27.289180    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:27.289187    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:27.300260    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:27.300269    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:27.311993    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:27.312003    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:29.855462    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:34.858151    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:34.858987    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:34.900944    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:34.901111    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:34.923742    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:34.923882    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:34.939915    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:34.940006    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:34.952131    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:34.952214    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:34.967451    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:34.967532    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:34.978276    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:34.978362    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:34.989894    4886 logs.go:282] 0 containers: []
	W1028 04:43:34.989912    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:34.989979    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:35.000194    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:35.000222    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:35.000228    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:35.012272    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:35.012283    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:35.049844    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:35.049858    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:35.064389    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:35.064405    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:35.089151    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:35.089165    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:35.104138    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:35.104151    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:35.122723    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:35.122735    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:35.135338    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:35.135355    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:35.146890    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:35.146901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:35.158780    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:35.158793    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:35.170467    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:35.170480    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:35.187841    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:35.187853    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:35.200504    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:35.200517    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:35.218787    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:35.218801    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:35.242977    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:35.242988    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:35.283624    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:35.283634    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:35.287910    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:35.287915    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:37.804040    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:42.806232    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:42.806366    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:42.824667    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:42.824754    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:42.836642    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:42.836725    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:42.848774    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:42.848860    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:42.859508    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:42.859588    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:42.872862    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:42.872948    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:42.883162    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:42.883247    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:42.894082    4886 logs.go:282] 0 containers: []
	W1028 04:43:42.894096    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:42.894163    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:42.904405    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:42.904423    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:42.904428    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:42.945444    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:42.945455    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:42.970185    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:42.970202    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:42.981707    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:42.981722    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:43.006302    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:43.006312    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:43.020285    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:43.020302    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:43.031947    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:43.031957    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:43.043150    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:43.043161    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:43.054909    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:43.054919    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:43.067168    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:43.067178    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:43.102846    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:43.102856    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:43.121361    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:43.121369    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:43.136175    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:43.136185    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:43.147989    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:43.147998    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:43.152384    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:43.152389    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:43.167661    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:43.167671    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:43.179441    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:43.179450    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:45.699247    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:50.702149    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:50.702732    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:50.738219    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:50.738373    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:50.759507    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:50.759598    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:50.774435    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:50.774529    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:50.790983    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:50.791068    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:50.813054    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:50.813144    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:50.837124    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:50.837214    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:50.860788    4886 logs.go:282] 0 containers: []
	W1028 04:43:50.860805    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:50.860867    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:50.871833    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:50.871853    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:50.871858    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:50.888727    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:50.888740    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:50.913998    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:50.914009    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:50.926383    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:50.926396    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:50.969516    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:50.969530    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:50.974326    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:50.974337    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:50.989320    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:50.989331    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:51.001165    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:51.001180    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:51.016724    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:51.016734    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:51.029025    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:51.029035    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:51.046109    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:51.046124    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:51.058033    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:51.058043    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:51.071775    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:51.071790    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:51.083694    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:51.083705    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:51.120028    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:51.120040    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:51.134344    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:51.134354    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:51.161105    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:51.161113    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:53.675385    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:58.676414    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:58.676548    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:58.689976    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:58.690047    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:58.704979    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:58.705053    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:58.716179    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:58.716245    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:58.727375    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:58.727441    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:58.739484    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:58.739556    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:58.750208    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:58.750282    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:58.760682    4886 logs.go:282] 0 containers: []
	W1028 04:43:58.760693    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:58.760758    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:58.771703    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:58.771720    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:58.771726    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:58.784532    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:58.784544    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:58.802656    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:58.802671    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:58.818807    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:58.818820    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:58.823800    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:58.823807    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:58.837682    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:58.837692    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:58.852391    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:58.852405    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:58.868715    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:58.868724    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:58.881921    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:58.881932    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:58.895014    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:58.895026    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:58.920233    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:58.920243    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:58.947817    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:58.947831    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:58.992315    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:58.992331    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:59.031176    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:59.031189    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:59.048766    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:59.048780    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:59.062628    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:59.062638    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:59.076330    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:59.076343    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:01.597388    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:06.599742    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:06.600236    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:06.640827    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:06.640989    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:06.661859    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:06.661972    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:06.677355    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:06.677447    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:06.690159    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:06.690243    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:06.701162    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:06.701237    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:06.712359    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:06.712438    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:06.723873    4886 logs.go:282] 0 containers: []
	W1028 04:44:06.723888    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:06.723958    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:06.735483    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:06.735500    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:06.735505    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:06.775755    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:06.775775    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:06.793439    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:06.793449    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:06.808442    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:06.808452    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:06.829219    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:06.829231    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:06.863484    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:06.863494    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:06.877491    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:06.877507    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:06.889103    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:06.889118    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:06.900676    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:06.900687    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:06.912739    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:06.912750    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:06.928742    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:06.928757    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:06.964756    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:06.964766    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:06.980271    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:06.980283    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:06.992365    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:06.992377    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:07.015920    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:07.015944    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:07.020397    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:07.020406    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:07.032705    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:07.032720    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:09.544086    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:14.544451    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:14.544572    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:14.557430    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:14.557520    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:14.567961    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:14.568042    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:14.578887    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:14.578957    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:14.589562    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:14.589645    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:14.600097    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:14.600172    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:14.610980    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:14.611059    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:14.621548    4886 logs.go:282] 0 containers: []
	W1028 04:44:14.621560    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:14.621624    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:14.632670    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:14.632687    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:14.632694    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:14.650135    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:14.650146    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:14.662769    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:14.662780    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:14.675056    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:14.675067    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:14.687010    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:14.687020    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:14.726522    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:14.726538    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:14.764240    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:14.764252    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:14.793380    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:14.793392    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:14.812023    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:14.812034    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:14.827723    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:14.827735    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:14.840194    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:14.840207    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:14.852662    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:14.852675    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:14.867675    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:14.867689    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:14.879359    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:14.879371    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:14.903242    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:14.903250    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:14.907889    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:14.907897    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:14.922388    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:14.922399    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:17.436190    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:22.437731    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:22.437859    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:22.451604    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:22.451693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:22.469372    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:22.469447    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:22.479801    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:22.479883    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:22.490349    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:22.490426    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:22.501965    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:22.502054    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:22.513661    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:22.513745    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:22.524459    4886 logs.go:282] 0 containers: []
	W1028 04:44:22.524477    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:22.524560    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:22.536165    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:22.536185    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:22.536191    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:22.540709    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:22.540715    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:22.554646    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:22.554660    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:22.594330    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:22.594340    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:22.614833    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:22.614843    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:22.628595    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:22.628606    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:22.640570    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:22.640583    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:22.675586    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:22.675603    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:22.686990    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:22.687006    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:22.701482    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:22.701493    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:22.725435    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:22.725451    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:22.766185    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:22.766194    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:22.780227    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:22.780241    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:22.795598    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:22.795617    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:22.813184    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:22.813195    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:22.825153    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:22.825168    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:22.836341    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:22.836352    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:25.352769    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:30.353826    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:30.353877    4886 kubeadm.go:597] duration metric: took 4m4.429027s to restartPrimaryControlPlane
	W1028 04:44:30.353932    4886 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 04:44:30.353954    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 04:44:31.373522    4886 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019552042s)
	I1028 04:44:31.373792    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 04:44:31.378702    4886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:44:31.381462    4886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:44:31.384045    4886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:44:31.384054    4886 kubeadm.go:157] found existing configuration files:
	
	I1028 04:44:31.384088    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf
	I1028 04:44:31.387013    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:44:31.387043    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:44:31.390149    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf
	I1028 04:44:31.392694    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:44:31.392724    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:44:31.395687    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf
	I1028 04:44:31.398982    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:44:31.399018    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:44:31.402180    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf
	I1028 04:44:31.404644    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:44:31.404676    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
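
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and a non-zero grep exit (here because the files are missing entirely) marks the file stale and triggers an rm -f before kubeadm rewrites it. A sketch of that check, again with an assumed runSSH helper in place of the remote shell:

    package main

    import "fmt"

    // Sketch of the stale-kubeconfig cleanup above: keep a conf file only if
    // it already points at the expected control-plane endpoint.
    func cleanStaleConfigs(runSSH func(cmd string) error) {
        endpoint := "https://control-plane.minikube.internal:57028"
        for _, conf := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + conf
            // grep exits non-zero if the endpoint (or the file) is missing...
            if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
                // ...in which case the file is removed; rm -f also tolerates
                // the file not existing, as in this run.
                _ = runSSH("sudo rm -f " + path)
            }
        }
    }

    func main() {
        cleanStaleConfigs(func(cmd string) error {
            fmt.Println("Run:", cmd)
            return nil
        })
    }
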
	I1028 04:44:31.407554    4886 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 04:44:31.425823    4886 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 04:44:31.425871    4886 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 04:44:31.476980    4886 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 04:44:31.477049    4886 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 04:44:31.477106    4886 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 04:44:31.529386    4886 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 04:44:31.532600    4886 out.go:235]   - Generating certificates and keys ...
	I1028 04:44:31.532635    4886 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 04:44:31.532669    4886 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 04:44:31.532708    4886 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 04:44:31.532742    4886 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 04:44:31.532842    4886 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 04:44:31.532870    4886 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 04:44:31.532912    4886 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 04:44:31.532943    4886 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 04:44:31.532982    4886 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 04:44:31.533023    4886 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 04:44:31.533043    4886 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 04:44:31.533103    4886 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 04:44:31.593458    4886 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 04:44:31.699508    4886 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 04:44:31.893138    4886 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 04:44:31.965012    4886 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 04:44:31.998472    4886 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 04:44:31.998777    4886 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 04:44:31.998818    4886 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 04:44:32.069143    4886 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 04:44:32.073444    4886 out.go:235]   - Booting up control plane ...
	I1028 04:44:32.073492    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 04:44:32.073529    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 04:44:32.073630    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 04:44:32.073704    4886 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 04:44:32.073831    4886 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 04:44:36.575667    4886 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502630 seconds
	I1028 04:44:36.575730    4886 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 04:44:36.579311    4886 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 04:44:37.099269    4886 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 04:44:37.099638    4886 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-687000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 04:44:37.604300    4886 kubeadm.go:310] [bootstrap-token] Using token: w7krdh.lvwpsl5dc8t7bk4m
	I1028 04:44:37.610935    4886 out.go:235]   - Configuring RBAC rules ...
	I1028 04:44:37.610994    4886 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 04:44:37.611040    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 04:44:37.617140    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 04:44:37.618627    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 04:44:37.620055    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 04:44:37.621126    4886 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 04:44:37.624548    4886 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 04:44:37.764610    4886 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 04:44:38.007861    4886 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 04:44:38.008237    4886 kubeadm.go:310] 
	I1028 04:44:38.008266    4886 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 04:44:38.008268    4886 kubeadm.go:310] 
	I1028 04:44:38.008315    4886 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 04:44:38.008321    4886 kubeadm.go:310] 
	I1028 04:44:38.008335    4886 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 04:44:38.008369    4886 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 04:44:38.008401    4886 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 04:44:38.008410    4886 kubeadm.go:310] 
	I1028 04:44:38.008437    4886 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 04:44:38.008440    4886 kubeadm.go:310] 
	I1028 04:44:38.008462    4886 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 04:44:38.008468    4886 kubeadm.go:310] 
	I1028 04:44:38.008497    4886 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 04:44:38.008584    4886 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 04:44:38.008631    4886 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 04:44:38.008633    4886 kubeadm.go:310] 
	I1028 04:44:38.008674    4886 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 04:44:38.008726    4886 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 04:44:38.008730    4886 kubeadm.go:310] 
	I1028 04:44:38.008770    4886 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w7krdh.lvwpsl5dc8t7bk4m \
	I1028 04:44:38.008821    4886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf \
	I1028 04:44:38.008851    4886 kubeadm.go:310] 	--control-plane 
	I1028 04:44:38.008857    4886 kubeadm.go:310] 
	I1028 04:44:38.008929    4886 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 04:44:38.008951    4886 kubeadm.go:310] 
	I1028 04:44:38.008991    4886 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w7krdh.lvwpsl5dc8t7bk4m \
	I1028 04:44:38.009041    4886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf 
	I1028 04:44:38.009125    4886 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 04:44:38.009143    4886 cni.go:84] Creating CNI manager for ""
	I1028 04:44:38.009155    4886 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:44:38.015424    4886 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 04:44:38.023485    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 04:44:38.026806    4886 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
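
The 496-byte file copied into /etc/cni/net.d configures the bridge CNI that the preceding line recommends for the qemu2 driver with the docker runtime. The log does not reproduce the file itself; the sketch below writes an illustrative bridge-plus-portmap conflist (all field values assumed, not the exact bytes minikube deploys):

    package main

    import "os"

    // Illustrative bridge CNI conflist. Values are assumptions; the actual
    // 496-byte file is not shown in this log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Mirrors the scp step above: place the conflist where the kubelet's
        // CNI plugin will find it (requires root on the guest).
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }
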
	I1028 04:44:38.031686    4886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 04:44:38.031758    4886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 04:44:38.031772    4886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-687000 minikube.k8s.io/updated_at=2024_10_28T04_44_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=running-upgrade-687000 minikube.k8s.io/primary=true
	I1028 04:44:38.035048    4886 ops.go:34] apiserver oom_adj: -16
	I1028 04:44:38.070339    4886 kubeadm.go:1113] duration metric: took 38.618208ms to wait for elevateKubeSystemPrivileges
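
Two post-init fixups happen here: a minikube-rbac clusterrolebinding grants cluster-admin to kube-system:default (the elevateKubeSystemPrivileges step timed above), and the apiserver's OOM adjustment is probed via /proc, reporting -16, which deprioritizes it for the kernel OOM killer. A sketch of the probe:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Read the kube-apiserver's oom_adj, as the log does above. On this run
    // the value is -16 (less likely to be OOM-killed than ordinary processes).
    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }
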
	I1028 04:44:38.085958    4886 kubeadm.go:394] duration metric: took 4m12.175275667s to StartCluster
	I1028 04:44:38.085976    4886 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:44:38.086089    4886 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:44:38.086466    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:44:38.086665    4886 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:44:38.086696    4886 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 04:44:38.086770    4886 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-687000"
	I1028 04:44:38.086778    4886 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-687000"
	W1028 04:44:38.086781    4886 addons.go:243] addon storage-provisioner should already be in state true
	I1028 04:44:38.086792    4886 host.go:66] Checking if "running-upgrade-687000" exists ...
	I1028 04:44:38.086811    4886 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-687000"
	I1028 04:44:38.086824    4886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-687000"
	I1028 04:44:38.086835    4886 config.go:182] Loaded profile config "running-upgrade-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:44:38.087831    4886 kapi.go:59] client config for running-upgrade-687000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10495e680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:44:38.088163    4886 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-687000"
	W1028 04:44:38.088169    4886 addons.go:243] addon default-storageclass should already be in state true
	I1028 04:44:38.088176    4886 host.go:66] Checking if "running-upgrade-687000" exists ...
	I1028 04:44:38.089478    4886 out.go:177] * Verifying Kubernetes components...
	I1028 04:44:38.089794    4886 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 04:44:38.093438    4886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 04:44:38.093447    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:44:38.097314    4886 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:44:38.101453    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:44:38.105416    4886 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:44:38.105423    4886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 04:44:38.105430    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:44:38.178463    4886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:44:38.184303    4886 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:44:38.184359    4886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:38.191285    4886 api_server.go:72] duration metric: took 104.60725ms to wait for apiserver process to appear ...
	I1028 04:44:38.191296    4886 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:44:38.191306    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:38.204103    4886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:44:38.257405    4886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 04:44:38.530483    4886 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 04:44:38.530500    4886 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 04:44:43.193454    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:43.193526    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:48.193961    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:48.193997    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:53.194453    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:53.194500    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:58.195048    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:58.195110    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:03.195890    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:03.195950    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:08.196931    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:08.197013    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 04:45:08.533027    4886 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 04:45:08.541299    4886 out.go:177] * Enabled addons: storage-provisioner
	I1028 04:45:08.547288    4886 addons.go:510] duration metric: took 30.460470959s for enable addons: enabled=[storage-provisioner]
	I1028 04:45:13.198627    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:13.198694    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:18.200387    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:18.200441    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:23.202770    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:23.202811    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:28.203767    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:28.203784    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:33.205950    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:33.205976    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:38.208198    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:38.208374    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:38.219557    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:38.219638    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:38.233808    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:38.233888    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:38.244747    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:38.244824    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:38.255067    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:38.255141    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:38.265326    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:38.265408    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:38.275772    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:38.275844    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:38.286474    4886 logs.go:282] 0 containers: []
	W1028 04:45:38.286485    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:38.286546    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:38.296661    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:38.296676    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:38.296681    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:38.333285    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:38.333295    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:38.348039    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:38.348053    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:38.365010    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:38.365021    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:38.376314    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:38.376325    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:38.388053    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:38.388065    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:38.402164    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:38.402178    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:38.406673    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:38.406680    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:38.441640    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:38.441653    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:38.453456    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:38.453465    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:38.471381    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:38.471392    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:38.490746    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:38.490755    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:38.514368    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:38.514376    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
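
From here the run settles into the cycle that fills the rest of this section: probe /healthz, hit the client timeout after roughly five seconds, re-gather component logs, and retry until the overall node wait expires. A self-contained sketch of that poll; the retry cadence is an assumption, and TLS verification is skipped only to keep the sketch runnable:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between probes above
            Transport: &http.Transport{
                // The guest apiserver uses a cluster-internal CA; skipping
                // verification keeps this sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // e.g. Client.Timeout exceeded while awaiting headers
                fmt.Println("stopped:", err)
                continue // the client timeout itself paces the loop
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(time.Second) // avoid a busy loop on non-200 replies
        }
        fmt.Println("gave up waiting for apiserver")
    }
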
	I1028 04:45:41.028948    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:46.030902    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:46.031130    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:46.050272    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:46.050373    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:46.064530    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:46.064622    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:46.075509    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:46.075582    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:46.085653    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:46.085729    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:46.096255    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:46.096328    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:46.106760    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:46.106829    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:46.121717    4886 logs.go:282] 0 containers: []
	W1028 04:45:46.121735    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:46.121803    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:46.132705    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:46.132724    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:46.132729    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:46.169751    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:46.169763    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:46.185791    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:46.185804    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:46.197153    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:46.197167    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:46.209434    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:46.209446    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:46.225350    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:46.225366    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:46.239888    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:46.239901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:46.258454    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:46.258467    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:46.263203    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:46.263211    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:46.299753    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:46.299765    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:46.313783    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:46.313793    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:46.328640    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:46.328653    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:46.340631    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:46.340643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:48.868163    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:53.870933    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:53.871196    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:53.895918    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:53.896046    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:53.912704    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:53.912806    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:53.925918    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:53.926002    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:53.937006    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:53.937084    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:53.947141    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:53.947208    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:53.957445    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:53.957513    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:53.967642    4886 logs.go:282] 0 containers: []
	W1028 04:45:53.967654    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:53.967717    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:53.978178    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:53.978196    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:53.978201    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:54.014212    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:54.014222    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:54.031104    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:54.031114    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:54.046295    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:54.046307    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:54.059336    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:54.059348    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:54.073401    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:54.073412    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:54.084924    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:54.084936    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:54.102976    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:54.102988    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:54.141278    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:54.141289    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:54.146278    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:54.146284    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:54.167080    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:54.167091    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:54.178959    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:54.178970    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:54.203917    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:54.203928    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:56.718657    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:01.721163    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:01.721609    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:01.756222    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:01.756377    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:01.776635    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:01.776747    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:01.792316    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:01.792407    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:01.805554    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:01.805634    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:01.817602    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:01.817676    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:01.828345    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:01.828426    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:01.842389    4886 logs.go:282] 0 containers: []
	W1028 04:46:01.842401    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:01.842467    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:01.853811    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:01.853826    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:01.853832    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:01.865790    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:01.865802    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:01.877308    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:01.877319    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:01.911751    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:01.911761    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:01.926191    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:01.926202    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:01.937886    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:01.937897    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:01.949704    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:01.949719    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:01.968497    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:01.968508    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:01.992389    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:01.992400    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:02.028762    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:02.028770    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:02.033434    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:02.033442    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:02.048073    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:02.048086    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:02.062696    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:02.062707    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:04.576852    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:09.579215    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:09.579465    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:09.601604    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:09.601708    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:09.616481    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:09.616572    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:09.629183    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:09.629265    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:09.640038    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:09.640115    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:09.650925    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:09.650996    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:09.661679    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:09.661755    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:09.672460    4886 logs.go:282] 0 containers: []
	W1028 04:46:09.672472    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:09.672540    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:09.683266    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:09.683285    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:09.683293    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:09.718007    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:09.718020    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:09.732343    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:09.732357    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:09.746465    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:09.746475    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:09.757787    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:09.757798    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:09.772093    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:09.772104    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:09.783826    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:09.783839    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:09.798200    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:09.798211    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:09.835937    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:09.835947    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:09.849603    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:09.849615    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:09.868724    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:09.868742    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:09.880105    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:09.880119    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:09.905635    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:09.905643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:12.412203    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:17.414482    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:17.414703    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:17.433520    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:17.433620    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:17.447280    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:17.447358    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:17.458046    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:17.458115    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:17.468914    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:17.468989    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:17.479562    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:17.479648    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:17.490224    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:17.490292    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:17.500962    4886 logs.go:282] 0 containers: []
	W1028 04:46:17.500974    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:17.501037    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:17.514525    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:17.514540    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:17.514546    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:17.526218    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:17.526231    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:17.563996    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:17.564004    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:17.568382    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:17.568390    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:17.582918    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:17.582930    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:17.595013    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:17.595027    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:17.607116    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:17.607126    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:17.624737    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:17.624748    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:17.663201    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:17.663215    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:17.676638    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:17.676648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:17.690813    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:17.690826    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:17.702985    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:17.702996    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:17.714972    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:17.714984    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:20.239854    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:25.242363    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:25.242818    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:25.278966    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:25.279103    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:25.298680    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:25.298779    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:25.312623    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:25.312710    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:25.325006    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:25.325089    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:25.337789    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:25.337874    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:25.352483    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:25.352564    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:25.363298    4886 logs.go:282] 0 containers: []
	W1028 04:46:25.363315    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:25.363386    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:25.374012    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:25.374027    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:25.374033    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:25.388858    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:25.388875    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:25.404786    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:25.404797    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:25.417508    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:25.417519    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:25.433980    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:25.433990    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:25.458739    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:25.458748    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:25.477193    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:25.477203    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:25.489577    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:25.489593    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:25.501356    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:25.501366    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:25.539988    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:25.539997    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:25.545296    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:25.545305    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:25.579563    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:25.579575    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:25.591523    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:25.591535    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
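
The order of the "Gathering logs for ..." lines is shuffled between passes (compare this pass with the one that follows), which is consistent with the log sources being iterated out of a Go map, whose iteration order the runtime deliberately randomizes. That is an inference from the report, not a detail verified against the minikube source; a short demonstration of the underlying behavior:

    package main

    import "fmt"

    func main() {
        // Iterating the same Go map twice usually yields two different
        // orders, because the runtime randomizes the starting bucket.
        // If the log sources live in a map (an inference from this
        // report, not a verified detail), that explains the shuffle.
        sources := map[string]string{
            "kube-apiserver": "fdf0adcd0bc4",
            "etcd":           "e39b85c1b224",
            "kube-scheduler": "0d31e77afb39",
            "kube-proxy":     "a4962d6996f4",
        }
        for pass := 1; pass <= 2; pass++ {
            fmt.Printf("pass %d:", pass)
            for name := range sources {
                fmt.Printf(" %s", name)
            }
            fmt.Println()
        }
    }
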
	I1028 04:46:28.116038    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:33.118398    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
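
Each probe issues a GET to https://10.0.2.15:8443/healthz and gives up after exactly five seconds: note the 5 s gap between every "Checking apiserver healthz" timestamp and its matching "stopped" line, and the "Client.Timeout exceeded" wording, which is Go's net/http client-timeout error. After each failure the full log-gathering pass repeats. A minimal sketch of such a probe loop, assuming only the 5 s timeout observed above (certificate verification is skipped purely for illustration, since the test apiserver serves a self-signed certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The 5-second Timeout reproduces the exact error text above:
        // "context deadline exceeded (Client.Timeout exceeded while
        // awaiting headers)".
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The test apiserver uses a self-signed certificate;
                // skipping verification is acceptable only in a sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                continue // minikube gathers logs here, then re-probes
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }
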
	I1028 04:46:33.118693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:33.145080    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:33.145224    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:33.162560    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:33.162661    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:33.177196    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:33.177281    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:33.188547    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:33.188625    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:33.198745    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:33.198817    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:33.212649    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:33.212715    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:33.222657    4886 logs.go:282] 0 containers: []
	W1028 04:46:33.222670    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:33.222732    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:33.233070    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:33.233087    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:33.233092    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:33.270988    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:33.270999    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:33.305424    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:33.305436    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:33.316996    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:33.317008    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:33.328172    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:33.328184    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:33.353273    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:33.353282    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:33.365042    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:33.365053    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:33.383029    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:33.383040    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:33.387947    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:33.387956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:33.402527    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:33.402538    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:33.416566    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:33.416581    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:33.427712    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:33.427723    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:33.441417    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:33.441432    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:35.959086    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:40.961407    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:40.961621    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:40.978974    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:40.979078    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:40.991850    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:40.991935    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:41.002628    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:41.002703    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:41.013527    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:41.013599    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:41.023980    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:41.024051    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:41.034610    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:41.034693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:41.045224    4886 logs.go:282] 0 containers: []
	W1028 04:46:41.045237    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:41.045304    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:41.055755    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:41.055772    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:41.055777    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:41.067310    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:41.067320    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:41.081289    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:41.081302    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:41.097950    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:41.097961    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:41.121727    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:41.121734    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:41.133051    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:41.133061    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:41.169448    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:41.169462    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:41.181957    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:41.181971    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:41.196824    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:41.196836    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:41.214574    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:41.214587    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:41.231690    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:41.231703    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:41.243490    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:41.243501    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:41.280963    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:41.280973    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:43.787593    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:48.790350    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:48.790820    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:48.832641    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:48.832803    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:48.858118    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:48.858237    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:48.876745    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:48.876823    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:48.888114    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:48.888186    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:48.898442    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:48.898512    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:48.909126    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:48.909208    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:48.919667    4886 logs.go:282] 0 containers: []
	W1028 04:46:48.919680    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:48.919749    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:48.930000    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:48.930014    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:48.930020    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:48.944494    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:48.944509    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:48.958240    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:48.958252    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:48.977473    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:48.977487    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:48.989215    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:48.989230    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:49.001804    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:49.001815    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:49.025336    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:49.025346    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:49.037284    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:49.037294    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:49.075262    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:49.075270    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:49.113441    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:49.113453    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:49.132044    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:49.132055    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:49.154728    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:49.154739    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:49.166448    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:49.166459    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:51.673150    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:56.675511    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:56.675749    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:56.697302    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:56.697419    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:56.712793    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:56.712872    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:56.728968    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:46:56.729055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:56.739657    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:56.739741    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:56.750059    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:56.750135    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:56.760764    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:56.760836    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:56.771003    4886 logs.go:282] 0 containers: []
	W1028 04:46:56.771015    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:56.771078    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:56.781783    4886 logs.go:282] 1 containers: [d4660ff68fc4]
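
The coredns count jumps from 2 to 4 containers in the pass above. Because the probe uses `docker ps -a`, exited containers are listed alongside running ones, so the two new IDs (0eca4679df0f, 5359e344efc7) most likely belong to recreated CoreDNS pods whose old containers are still visible; the report itself only records the combined list. A sketch separating the two states, with the same hypothetical helper style as before:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dockerIDs runs `docker <args...>` and splits the output into IDs.
    // Errors are surfaced rather than swallowed so a missing docker
    // binary is visible.
    func dockerIDs(args ...string) ([]string, error) {
        out, err := exec.Command("docker", args...).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // With -a, exited containers are listed too, so recreated
        // CoreDNS pods leave old containers beside the new IDs.
        all, err := dockerIDs("ps", "-a",
            "--filter", "name=k8s_coredns", "--format", "{{.ID}}")
        if err != nil {
            fmt.Println(err)
            return
        }
        running, err := dockerIDs("ps",
            "--filter", "name=k8s_coredns", "--format", "{{.ID}}")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("all:", all)         // 4 IDs in the pass above
        fmt.Println("running:", running) // typically only the live pair
    }
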
	I1028 04:46:56.781804    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:56.781809    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:56.797194    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:56.797206    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:56.811403    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:46:56.811418    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:46:56.823060    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:56.823069    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:56.834738    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:56.834749    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:56.846750    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:56.846762    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:56.864155    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:56.864166    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:56.935634    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:46:56.935648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:46:56.948665    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:56.948676    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:56.968465    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:56.968475    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:56.980153    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:56.980168    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:57.005069    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:57.005077    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:57.042566    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:57.042577    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:57.047069    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:57.047076    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:57.058636    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:57.058648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:59.579070    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:04.579831    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:04.579955    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:04.590717    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:04.590792    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:04.601654    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:04.601731    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:04.612764    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:04.612851    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:04.623306    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:04.623374    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:04.633723    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:04.633793    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:04.644110    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:04.644184    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:04.654226    4886 logs.go:282] 0 containers: []
	W1028 04:47:04.654244    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:04.654310    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:04.665208    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:04.665224    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:04.665230    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:04.670267    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:04.670275    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:04.716981    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:04.716995    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:04.729670    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:04.729682    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:04.748742    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:04.748755    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:04.760305    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:04.760315    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:04.785369    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:04.785379    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:04.799447    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:04.799460    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:04.814063    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:04.814076    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:04.832081    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:04.832090    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:04.868257    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:04.868265    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:04.886308    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:04.886318    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:04.901737    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:04.901752    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:04.914032    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:04.914044    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:04.928269    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:04.928279    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:07.442034    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:12.444456    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:12.444715    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:12.468084    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:12.468209    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:12.484926    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:12.485022    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:12.498376    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:12.498458    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:12.509921    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:12.509991    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:12.521593    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:12.521671    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:12.532050    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:12.532128    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:12.541666    4886 logs.go:282] 0 containers: []
	W1028 04:47:12.541681    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:12.541739    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:12.552901    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:12.552922    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:12.552927    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:12.567350    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:12.567360    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:12.579558    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:12.579569    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:12.604087    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:12.604103    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:12.615612    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:12.615622    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:12.627276    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:12.627287    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:12.662842    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:12.662856    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:12.667245    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:12.667252    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:12.701945    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:12.701956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:12.714174    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:12.714186    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:12.729105    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:12.729115    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:12.741340    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:12.741353    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:12.755876    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:12.755887    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:12.766951    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:12.766961    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:12.777955    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:12.777966    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:15.298037    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:20.300855    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:20.301327    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:20.336198    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:20.336347    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:20.356470    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:20.356567    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:20.371429    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:20.371515    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:20.384061    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:20.384144    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:20.395165    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:20.395240    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:20.406413    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:20.406498    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:20.419950    4886 logs.go:282] 0 containers: []
	W1028 04:47:20.419963    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:20.420035    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:20.431232    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:20.431251    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:20.431256    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:20.447333    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:20.447348    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:20.460147    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:20.460158    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:20.475178    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:20.475190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:20.498183    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:20.498194    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:20.511205    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:20.511217    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:20.525009    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:20.525021    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:20.530392    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:20.530399    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:20.568060    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:20.568069    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:20.582498    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:20.582510    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:20.599273    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:20.599283    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:20.621439    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:20.621449    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:20.633209    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:20.633224    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:20.669776    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:20.669787    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:20.682079    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:20.682091    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:23.210144    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:28.212526    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:28.212720    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:28.229616    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:28.229712    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:28.242182    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:28.242263    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:28.253813    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:28.253899    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:28.265031    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:28.265105    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:28.275173    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:28.275251    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:28.285770    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:28.285845    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:28.296073    4886 logs.go:282] 0 containers: []
	W1028 04:47:28.296086    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:28.296148    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:28.306268    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:28.306286    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:28.306291    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:28.317839    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:28.317855    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:28.354885    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:28.354898    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:28.369581    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:28.369593    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:28.380901    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:28.380914    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:28.419441    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:28.419450    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:28.430986    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:28.431001    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:28.448370    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:28.448386    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:28.462538    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:28.462553    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:28.473819    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:28.473830    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:28.486228    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:28.486243    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:28.504390    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:28.504400    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:28.528087    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:28.528095    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:28.532078    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:28.532085    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:28.543943    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:28.543960    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:31.056897    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:36.057617    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:36.057837    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:36.075346    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:36.075437    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:36.089105    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:36.089192    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:36.101796    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:36.101887    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:36.112217    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:36.112291    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:36.122740    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:36.122818    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:36.133489    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:36.133565    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:36.143406    4886 logs.go:282] 0 containers: []
	W1028 04:47:36.143418    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:36.143486    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:36.164952    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:36.164971    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:36.164978    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:36.190069    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:36.190079    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:36.204619    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:36.204630    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:36.216613    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:36.216625    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:36.231144    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:36.231156    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:36.243088    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:36.243099    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:36.261535    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:36.261546    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:36.272882    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:36.272894    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:36.311122    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:36.311130    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:36.330337    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:36.330351    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:36.354278    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:36.354287    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:36.366002    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:36.366014    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:36.370287    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:36.370297    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:36.405543    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:36.405557    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:36.419890    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:36.419938    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:38.940192    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:43.942470    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:43.942706    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:43.965506    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:43.965635    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:43.981147    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:43.981247    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:43.993943    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:43.994028    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:44.005441    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:44.005512    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:44.023759    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:44.023830    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:44.033937    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:44.034002    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:44.044811    4886 logs.go:282] 0 containers: []
	W1028 04:47:44.044824    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:44.044890    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:44.056037    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:44.056059    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:44.056068    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:44.067582    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:44.067594    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:44.079311    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:44.079322    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:44.092359    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:44.092371    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:44.115497    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:44.115505    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:44.134182    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:44.134193    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:44.145695    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:44.145705    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:44.182836    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:44.182851    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:44.197731    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:44.197742    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:44.210134    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:44.210145    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:44.215047    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:44.215054    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:44.250490    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:44.250503    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:44.262612    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:44.262623    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:44.276947    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:44.276956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:44.291779    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:44.291791    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:46.812151    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:51.814666    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:51.815140    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:51.848790    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:51.848949    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:51.868984    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:51.869085    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:51.894828    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:51.894908    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:51.905714    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:51.905792    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:51.916319    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:51.916397    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:51.929937    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:51.930018    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:51.941199    4886 logs.go:282] 0 containers: []
	W1028 04:47:51.941211    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:51.941278    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:51.956568    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:51.956586    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:51.956592    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:51.974300    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:51.974311    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:51.978859    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:51.978869    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:51.993177    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:51.993190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:52.009030    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:52.009043    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:52.025254    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:52.025267    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:52.039536    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:52.039549    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:52.053249    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:52.053261    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:52.093905    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:52.093919    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:52.106324    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:52.106335    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:52.119191    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:52.119201    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:52.155131    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:52.155142    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:52.167972    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:52.167984    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:52.180026    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:52.180036    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:52.205362    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:52.205375    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:54.719704    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:59.722533    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:59.722980    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:59.761492    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:59.761644    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:59.784964    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:59.785087    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:59.803343    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:59.803437    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:59.814993    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:59.815068    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:59.836192    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:59.836263    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:59.847117    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:59.847197    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:59.860979    4886 logs.go:282] 0 containers: []
	W1028 04:47:59.860990    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:59.861055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:59.871627    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:59.871650    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:59.871656    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:59.886140    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:59.886153    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:59.898121    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:59.898135    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:59.916041    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:59.916055    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:59.930788    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:59.930798    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:59.942913    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:59.942922    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:59.963961    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:59.963973    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:59.976332    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:59.976343    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:00.002364    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:00.002384    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:00.015624    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:00.015643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:00.020706    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:00.020719    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:00.033310    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:00.033326    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:00.047531    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:00.047541    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:00.059194    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:00.059205    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:00.096742    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:00.096751    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:02.634232    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:07.636565    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:07.636682    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:07.647961    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:07.648043    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:07.660252    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:07.660334    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:07.672146    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:07.672229    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:07.683411    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:07.683492    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:07.699344    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:07.699425    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:07.710601    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:07.710686    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:07.722973    4886 logs.go:282] 0 containers: []
	W1028 04:48:07.722986    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:07.723055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:07.734978    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:07.734998    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:07.735004    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:07.774086    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:07.774105    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:07.779192    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:07.779199    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:07.799623    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:07.799641    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:07.811878    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:07.811895    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:07.852653    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:07.852666    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:07.868062    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:07.868078    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:07.882824    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:07.882836    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:07.895939    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:07.895951    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:07.908314    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:07.908325    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:07.921897    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:07.921914    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:07.946392    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:07.946407    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:07.958785    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:07.958802    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:07.978142    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:07.978154    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:07.990300    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:07.990313    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:10.508639    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:15.510905    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:15.511173    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:15.533097    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:15.533210    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:15.547345    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:15.547429    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:15.559716    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:15.559796    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:15.570502    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:15.570581    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:15.581262    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:15.581345    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:15.591606    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:15.591691    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:15.601787    4886 logs.go:282] 0 containers: []
	W1028 04:48:15.601799    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:15.601860    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:15.612178    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:15.612196    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:15.612201    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:15.624177    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:15.624191    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:15.648431    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:15.648438    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:15.652626    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:15.652634    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:15.671541    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:15.671554    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:15.685621    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:15.685634    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:15.697414    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:15.697425    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:15.708955    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:15.708968    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:15.724630    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:15.724644    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:15.736234    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:15.736247    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:15.772618    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:15.772626    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:15.787243    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:15.787253    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:15.801787    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:15.801798    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:15.813571    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:15.813584    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:15.849879    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:15.849892    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:18.366927    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:23.369017    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:23.369171    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:23.381240    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:23.381341    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:23.393429    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:23.393513    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:23.405640    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:23.405726    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:23.417111    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:23.417194    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:23.427427    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:23.427510    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:23.438984    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:23.439065    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:23.450759    4886 logs.go:282] 0 containers: []
	W1028 04:48:23.450773    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:23.450841    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:23.461494    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:23.461512    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:23.461518    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:23.476251    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:23.476268    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:23.492341    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:23.492356    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:23.506122    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:23.506133    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:23.511368    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:23.511376    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:23.525181    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:23.525197    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:23.537761    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:23.537775    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:23.549709    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:23.549725    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:23.592075    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:23.592089    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:23.633007    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:23.633019    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:23.644655    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:23.644666    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:23.661963    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:23.661973    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:23.673411    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:23.673424    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:23.697685    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:23.697696    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:23.709110    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:23.709124    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:26.229289    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:31.231520    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:31.231626    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:31.245055    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:31.245145    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:31.259741    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:31.259814    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:31.276275    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:31.276367    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:31.287616    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:31.287684    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:31.298219    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:31.298287    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:31.308688    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:31.308765    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:31.318683    4886 logs.go:282] 0 containers: []
	W1028 04:48:31.318693    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:31.318753    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:31.333264    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:31.333284    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:31.333289    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:31.347622    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:31.347633    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:31.384046    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:31.384056    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:31.388713    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:31.388721    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:31.402890    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:31.402901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:31.414793    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:31.414807    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:31.427099    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:31.427114    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:31.451112    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:31.451119    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:31.463238    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:31.463250    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:31.481556    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:31.481567    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:31.497990    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:31.498006    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:31.536212    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:31.536228    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:31.557579    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:31.557596    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:31.568950    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:31.568962    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:31.580628    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:31.580645    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:34.094543    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:39.096833    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:39.102449    4886 out.go:201] 
	W1028 04:48:39.106414    4886 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 04:48:39.106420    4886 out.go:270] * 
	W1028 04:48:39.106902    4886 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:48:39.118373    4886 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-687000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
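The failed flow can be replayed by hand from the Audit table below: bring the profile up with the old v1.26.0 binary, then re-run start on the same profile with the binary under test. A minimal reproduction sketch, assuming the same qemu2 driver and profile name (the v1.26.0 binary path is illustrative, not taken from the log):

	# start with the old release, then upgrade in place with the build under test
	./minikube-v1.26.0 start -p running-upgrade-687000 --memory=2200 --vm-driver=qemu2
	out/minikube-darwin-arm64 start -p running-upgrade-687000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	# the second command exits 80 here because the apiserver never reports healthy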
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-28 04:48:39.215084 -0700 PDT m=+4128.001033209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-687000 -n running-upgrade-687000
E1028 04:48:42.912091    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-687000 -n running-upgrade-687000: exit status 2 (15.669356333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
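Each cycle in the stderr log above is the harness polling https://10.0.2.15:8443/healthz and giving up after ~5s before dumping per-container logs. That address sits on the guest's user-mode network, so any manual probe has to run inside the VM. A sketch, assuming the profile is still running (the curl flags are standard options, not taken from the log; -k is needed because the apiserver serves a cluster-CA certificate):

	# probe the same endpoint the harness polls, from inside the guest
	out/minikube-darwin-arm64 ssh -p running-upgrade-687000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz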
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-687000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-446000          | force-systemd-flag-446000 | jenkins | v1.34.0 | 28 Oct 24 04:38 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-759000              | force-systemd-env-759000  | jenkins | v1.34.0 | 28 Oct 24 04:38 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-759000           | force-systemd-env-759000  | jenkins | v1.34.0 | 28 Oct 24 04:38 PDT | 28 Oct 24 04:38 PDT |
	| start   | -p docker-flags-375000                | docker-flags-375000       | jenkins | v1.34.0 | 28 Oct 24 04:38 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-446000             | force-systemd-flag-446000 | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-446000          | force-systemd-flag-446000 | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT | 28 Oct 24 04:39 PDT |
	| start   | -p cert-expiration-899000             | cert-expiration-899000    | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-375000 ssh               | docker-flags-375000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-375000 ssh               | docker-flags-375000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-375000                | docker-flags-375000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT | 28 Oct 24 04:39 PDT |
	| start   | -p cert-options-021000                | cert-options-021000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-021000 ssh               | cert-options-021000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-021000 -- sudo        | cert-options-021000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-021000                | cert-options-021000       | jenkins | v1.34.0 | 28 Oct 24 04:39 PDT | 28 Oct 24 04:39 PDT |
	| start   | -p running-upgrade-687000             | minikube                  | jenkins | v1.26.0 | 28 Oct 24 04:39 PDT | 28 Oct 24 04:40 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-687000             | running-upgrade-687000    | jenkins | v1.34.0 | 28 Oct 24 04:40 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-899000             | cert-expiration-899000    | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-899000             | cert-expiration-899000    | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT | 28 Oct 24 04:42 PDT |
	| start   | -p kubernetes-upgrade-628000          | kubernetes-upgrade-628000 | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-628000          | kubernetes-upgrade-628000 | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT | 28 Oct 24 04:42 PDT |
	| start   | -p kubernetes-upgrade-628000          | kubernetes-upgrade-628000 | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-628000          | kubernetes-upgrade-628000 | jenkins | v1.34.0 | 28 Oct 24 04:42 PDT | 28 Oct 24 04:42 PDT |
	| start   | -p stopped-upgrade-714000             | minikube                  | jenkins | v1.26.0 | 28 Oct 24 04:42 PDT | 28 Oct 24 04:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-714000 stop           | minikube                  | jenkins | v1.26.0 | 28 Oct 24 04:43 PDT | 28 Oct 24 04:43 PDT |
	| start   | -p stopped-upgrade-714000             | stopped-upgrade-714000    | jenkins | v1.34.0 | 28 Oct 24 04:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 04:43:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 04:43:30.235542    5010 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:43:30.235738    5010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:43:30.235742    5010 out.go:358] Setting ErrFile to fd 2...
	I1028 04:43:30.235745    5010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:43:30.235910    5010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:43:30.237297    5010 out.go:352] Setting JSON to false
	I1028 04:43:30.258072    5010 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4381,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:43:30.258160    5010 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:43:30.263508    5010 out.go:177] * [stopped-upgrade-714000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:43:30.270411    5010 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:43:30.270447    5010 notify.go:220] Checking for updates...
	I1028 04:43:30.279422    5010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:43:30.282414    5010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:43:30.285410    5010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:43:30.292411    5010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:43:30.296421    5010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:43:30.299869    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:43:30.304413    5010 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 04:43:30.307466    5010 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:43:30.311398    5010 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:43:30.318437    5010 start.go:297] selected driver: qemu2
	I1028 04:43:30.318443    5010 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:30.318483    5010 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:43:30.321203    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:43:30.321235    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:43:30.321264    5010 start.go:340] cluster config:
	{Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:30.321317    5010 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:43:30.329451    5010 out.go:177] * Starting "stopped-upgrade-714000" primary control-plane node in "stopped-upgrade-714000" cluster
	I1028 04:43:30.332389    5010 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:43:30.332401    5010 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 04:43:30.332407    5010 cache.go:56] Caching tarball of preloaded images
	I1028 04:43:30.332456    5010 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:43:30.332462    5010 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 04:43:30.332505    5010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/config.json ...
	I1028 04:43:30.332873    5010 start.go:360] acquireMachinesLock for stopped-upgrade-714000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:43:30.332908    5010 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "stopped-upgrade-714000"
	I1028 04:43:30.332917    5010 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:43:30.332922    5010 fix.go:54] fixHost starting: 
	I1028 04:43:30.333028    5010 fix.go:112] recreateIfNeeded on stopped-upgrade-714000: state=Stopped err=<nil>
	W1028 04:43:30.333036    5010 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:43:30.341464    5010 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-714000" ...
	I1028 04:43:29.855462    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:30.345419    5010 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:43:30.345487    5010 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57238-:22,hostfwd=tcp::57239-:2376,hostname=stopped-upgrade-714000 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/disk.qcow2
	I1028 04:43:30.392690    5010 main.go:141] libmachine: STDOUT: 
	I1028 04:43:30.392719    5010 main.go:141] libmachine: STDERR: 
	I1028 04:43:30.392727    5010 main.go:141] libmachine: Waiting for VM to start (ssh -p 57238 docker@127.0.0.1)...
	I1028 04:43:34.858151    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:34.858987    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:34.900944    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:34.901111    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:34.923742    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:34.923882    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:34.939915    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:34.940006    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:34.952131    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:34.952214    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:34.967451    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:34.967532    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:34.978276    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:34.978362    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:34.989894    4886 logs.go:282] 0 containers: []
	W1028 04:43:34.989912    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:34.989979    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:35.000194    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:35.000222    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:35.000228    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:35.012272    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:35.012283    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:35.049844    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:35.049858    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:35.064389    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:35.064405    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:35.089151    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:35.089165    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:35.104138    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:35.104151    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:35.122723    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:35.122735    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:35.135338    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:35.135355    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:35.146890    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:35.146901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:35.158780    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:35.158793    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:35.170467    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:35.170480    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:35.187841    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:35.187853    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:35.200504    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:35.200517    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:35.218787    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:35.218801    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:35.242977    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:35.242988    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:35.283624    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:35.283634    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:35.287910    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:35.287915    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:37.804040    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:42.806232    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:42.806366    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:42.824667    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:42.824754    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:42.836642    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:42.836725    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:42.848774    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:42.848860    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:42.859508    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:42.859588    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:42.872862    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:42.872948    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:42.883162    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:42.883247    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:42.894082    4886 logs.go:282] 0 containers: []
	W1028 04:43:42.894096    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:42.894163    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:42.904405    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:42.904423    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:42.904428    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:42.945444    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:42.945455    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:42.970185    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:42.970202    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:42.981707    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:42.981722    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:43.006302    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:43.006312    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:43.020285    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:43.020302    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:43.031947    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:43.031957    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:43.043150    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:43.043161    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:43.054909    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:43.054919    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:43.067168    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:43.067178    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:43.102846    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:43.102856    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:43.121361    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:43.121369    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:43.136175    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:43.136185    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:43.147989    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:43.147998    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:43.152384    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:43.152389    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:43.167661    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:43.167671    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:43.179441    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:43.179450    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:45.699247    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:50.865436    5010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/config.json ...
	I1028 04:43:50.865698    5010 machine.go:93] provisionDockerMachine start ...
	I1028 04:43:50.865763    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:50.865917    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:50.865922    5010 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 04:43:50.935946    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 04:43:50.935965    5010 buildroot.go:166] provisioning hostname "stopped-upgrade-714000"
	I1028 04:43:50.936046    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:50.936171    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:50.936180    5010 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-714000 && echo "stopped-upgrade-714000" | sudo tee /etc/hostname
	I1028 04:43:51.008474    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-714000
	
	I1028 04:43:51.008556    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.008672    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.008681    5010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-714000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-714000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-714000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 04:43:51.076482    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 04:43:51.076496    5010 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19876-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19876-1087/.minikube}
	I1028 04:43:51.076505    5010 buildroot.go:174] setting up certificates
	I1028 04:43:51.076509    5010 provision.go:84] configureAuth start
	I1028 04:43:51.076520    5010 provision.go:143] copyHostCerts
	I1028 04:43:51.076604    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem, removing ...
	I1028 04:43:51.076612    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem
	I1028 04:43:51.076702    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem (1078 bytes)
	I1028 04:43:51.076889    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem, removing ...
	I1028 04:43:51.076894    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem
	I1028 04:43:51.076935    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem (1123 bytes)
	I1028 04:43:51.077045    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem, removing ...
	I1028 04:43:51.077049    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem
	I1028 04:43:51.077088    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem (1679 bytes)
	I1028 04:43:51.077184    5010 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-714000 san=[127.0.0.1 localhost minikube stopped-upgrade-714000]
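	The "generating server cert" step issues a server certificate signed by the local minikube CA, with the SANs listed in the log line (127.0.0.1, localhost, minikube, stopped-upgrade-714000). A self-contained Go sketch of that kind of issuance follows, assuming RSA keys and generating the CA in-memory for brevity (the real flow loads ca.pem/ca-key.pem from disk); error handling is elided.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair (loaded from ca.pem / ca-key.pem in the real run).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs from the log line.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-714000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-714000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}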
	I1028 04:43:51.111364    5010 provision.go:177] copyRemoteCerts
	I1028 04:43:51.111421    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 04:43:51.111430    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.148312    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 04:43:51.155757    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 04:43:51.163055    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 04:43:51.170544    5010 provision.go:87] duration metric: took 94.01925ms to configureAuth
	I1028 04:43:51.170560    5010 buildroot.go:189] setting minikube options for container-runtime
	I1028 04:43:51.170706    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:43:51.170799    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.170897    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.170903    5010 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 04:43:51.237070    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 04:43:51.237079    5010 buildroot.go:70] root file system type: tmpfs
	I1028 04:43:51.237135    5010 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 04:43:51.237200    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.237324    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.237357    5010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 04:43:51.304908    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 04:43:51.304969    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.305065    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.305072    5010 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 04:43:51.673807    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 04:43:51.673821    5010 machine.go:96] duration metric: took 808.113625ms to provisionDockerMachine
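	The docker.service step above uses a write-if-changed idiom: the unit is staged as docker.service.new, and only when `diff -u` reports a difference (here: the file did not exist yet) is it moved into place, followed by daemon-reload, enable, and restart. A rough local Go equivalent of that idiom, under the assumption of direct filesystem access rather than SSH:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// installIfChanged stages content next to path and only swaps it in (and
	// restarts docker) when it differs, avoiding needless daemon restarts.
	func installIfChanged(path string, content []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unchanged: skip the restart entirely
		}
		if err := os.WriteFile(path+".new", content, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		// Mirrors the `systemctl -f daemon-reload && ... restart docker` above.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "docker").Run()
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		_ = installIfChanged("/lib/systemd/system/docker.service", unit)
	}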
	I1028 04:43:51.673829    5010 start.go:293] postStartSetup for "stopped-upgrade-714000" (driver="qemu2")
	I1028 04:43:51.673835    5010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 04:43:51.673912    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 04:43:51.673923    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.710754    5010 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 04:43:51.711928    5010 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 04:43:51.711940    5010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/addons for local assets ...
	I1028 04:43:51.712018    5010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/files for local assets ...
	I1028 04:43:51.712117    5010 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I1028 04:43:51.712222    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 04:43:51.715131    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:43:51.722285    5010 start.go:296] duration metric: took 48.451083ms for postStartSetup
	I1028 04:43:51.722300    5010 fix.go:56] duration metric: took 21.389298167s for fixHost
	I1028 04:43:51.722341    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.722453    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.722458    5010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 04:43:51.788709    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115831.564287838
	
	I1028 04:43:51.788738    5010 fix.go:216] guest clock: 1730115831.564287838
	I1028 04:43:51.788751    5010 fix.go:229] Guest: 2024-10-28 04:43:51.564287838 -0700 PDT Remote: 2024-10-28 04:43:51.722301 -0700 PDT m=+21.519381918 (delta=-158.013162ms)
	I1028 04:43:51.788761    5010 fix.go:200] guest clock delta is within tolerance: -158.013162ms
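	The clock check runs `date +%s.%N` in the guest, parses the fractional timestamp, and compares it to the host clock; a delta of -158ms is inside tolerance, so no resync is needed. A small sketch of that comparison, with the tolerance value assumed for illustration:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// withinTolerance parses a `date +%s.%N` stamp and reports the guest-host
	// clock delta and whether it is within tol.
	func withinTolerance(guestStamp string, tol time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(guestStamp, 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())
		return delta, math.Abs(float64(delta)) <= float64(tol)
	}

	func main() {
		// "1730115831.564287838" is the stamp the guest returned above.
		delta, ok := withinTolerance("1730115831.564287838", 2*time.Second)
		fmt.Println(delta, ok)
	}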
	I1028 04:43:51.788764    5010 start.go:83] releasing machines lock for "stopped-upgrade-714000", held for 21.455770375s
	I1028 04:43:51.788837    5010 ssh_runner.go:195] Run: cat /version.json
	I1028 04:43:51.788847    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.788837    5010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 04:43:51.788875    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	W1028 04:43:51.789349    5010 sshutil.go:64] dial failure (will retry): dial tcp [::1]:57238: connect: connection refused
	I1028 04:43:51.789368    5010 retry.go:31] will retry after 218.453401ms: dial tcp [::1]:57238: connect: connection refused
	W1028 04:43:52.044384    5010 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 04:43:52.044438    5010 ssh_runner.go:195] Run: systemctl --version
	I1028 04:43:52.046392    5010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 04:43:52.048060    5010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 04:43:52.048099    5010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 04:43:52.051177    5010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 04:43:52.056051    5010 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 04:43:52.056060    5010 start.go:495] detecting cgroup driver to use...
	I1028 04:43:52.056137    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:43:52.063220    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 04:43:52.066629    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 04:43:52.069567    5010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 04:43:52.069598    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 04:43:52.072426    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:43:52.075696    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 04:43:52.078976    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:43:52.082313    5010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 04:43:52.085134    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 04:43:52.088115    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 04:43:52.091408    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 04:43:52.094861    5010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 04:43:52.097539    5010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 04:43:52.100201    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:52.178468    5010 ssh_runner.go:195] Run: sudo systemctl restart containerd
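	The sed commands above rewrite /etc/containerd/config.toml in place so containerd uses the cgroupfs driver: SystemdCgroup is forced to false, legacy runtime names are swapped for io.containerd.runc.v2, and conf_dir is pinned to /etc/cni/net.d. The core rewrite translates directly to Go's regexp package; the snippet below applies the SystemdCgroup edit to an inline sample config rather than the real file.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	}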
	I1028 04:43:52.189160    5010 start.go:495] detecting cgroup driver to use...
	I1028 04:43:52.189250    5010 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 04:43:52.195750    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:43:52.200633    5010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 04:43:52.211204    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:43:52.216596    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 04:43:52.221617    5010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 04:43:52.283243    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 04:43:52.288481    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:43:52.293696    5010 ssh_runner.go:195] Run: which cri-dockerd
	I1028 04:43:52.294931    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 04:43:52.297614    5010 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 04:43:52.302609    5010 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 04:43:52.382354    5010 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 04:43:52.476000    5010 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 04:43:52.476061    5010 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
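	The 130-byte daemon.json scp'd above configures docker's cgroup driver; its exact contents are not shown in the log, so the sketch below only demonstrates the one setting the surrounding lines name, using dockerd's standard exec-opts key.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumption: the file sets the cgroupfs driver; other keys may exist.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b)) // destined for /etc/docker/daemon.json
	}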
	I1028 04:43:52.481643    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:52.559434    5010 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:43:53.721176    5010 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161720041s)
	I1028 04:43:53.721250    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 04:43:53.726074    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:43:53.730560    5010 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 04:43:53.806730    5010 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 04:43:53.880318    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:53.957021    5010 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 04:43:53.962746    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:43:53.967206    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:54.053208    5010 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 04:43:54.092580    5010 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 04:43:54.092683    5010 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 04:43:54.095700    5010 start.go:563] Will wait 60s for crictl version
	I1028 04:43:54.095764    5010 ssh_runner.go:195] Run: which crictl
	I1028 04:43:54.097219    5010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 04:43:54.112558    5010 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 04:43:54.112635    5010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:43:54.129692    5010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:43:50.702149    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:50.702732    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:50.738219    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:50.738373    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:50.759507    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:50.759598    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:50.774435    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:50.774529    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:50.790983    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:50.791068    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:50.813054    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:50.813144    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:50.837124    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:50.837214    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:50.860788    4886 logs.go:282] 0 containers: []
	W1028 04:43:50.860805    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:50.860867    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:50.871833    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:50.871853    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:50.871858    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:50.888727    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:50.888740    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:50.913998    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:50.914009    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:50.926383    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:50.926396    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:50.969516    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:50.969530    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:50.974326    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:50.974337    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:50.989320    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:50.989331    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:51.001165    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:51.001180    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:51.016724    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:51.016734    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:51.029025    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:51.029035    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:51.046109    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:51.046124    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:51.058033    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:51.058043    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:51.071775    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:51.071790    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:51.083694    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:51.083705    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:51.120028    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:51.120040    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:51.134344    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:51.134354    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:51.161105    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:51.161113    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:53.675385    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:43:54.148203    5010 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 04:43:54.148356    5010 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 04:43:54.149608    5010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 04:43:54.152956    5010 kubeadm.go:883] updating cluster {Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I1028 04:43:54.153001    5010 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:43:54.153049    5010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:43:54.163242    5010 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:43:54.163254    5010 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:43:54.163320    5010 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:43:54.166876    5010 ssh_runner.go:195] Run: which lz4
	I1028 04:43:54.168155    5010 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 04:43:54.169447    5010 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 04:43:54.169457    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 04:43:55.127782    5010 docker.go:653] duration metric: took 959.660875ms to copy over tarball
	I1028 04:43:55.127856    5010 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
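	The preload flow is: stat /preloaded.tar.lz4 on the guest; if absent, copy the cached image tarball over and unpack it into /var with lz4, preserving xattrs so file capabilities survive. A hedged sketch of that sequence, with runSSH standing in for minikube's ssh_runner (plain ssh/scp to the forwarded port, assuming key auth):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func runSSH(args ...string) error {
		cmd := exec.Command("ssh", append([]string{"-p", "57238", "docker@localhost"}, args...)...)
		return cmd.Run()
	}

	func main() {
		if err := runSSH("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
			// Not present yet: transfer the cached tarball, then extract it.
			_ = exec.Command("scp", "-P", "57238",
				"preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
				"docker@localhost:/preloaded.tar.lz4").Run()
			_ = runSSH("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
				"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		}
		fmt.Println("preload ensured")
	}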
	I1028 04:43:58.676414    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:43:58.676548    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:43:58.689976    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:43:58.690047    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:43:58.704979    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:43:58.705053    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:43:58.716179    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:43:58.716245    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:43:58.727375    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:43:58.727441    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:43:58.739484    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:43:58.739556    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:43:58.750208    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:43:58.750282    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:43:58.760682    4886 logs.go:282] 0 containers: []
	W1028 04:43:58.760693    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:43:58.760758    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:43:58.771703    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:43:58.771720    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:43:58.771726    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:43:58.784532    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:43:58.784544    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:43:58.802656    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:43:58.802671    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:43:58.818807    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:43:58.818820    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:43:58.823800    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:43:58.823807    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:43:58.837682    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:43:58.837692    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:43:58.852391    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:43:58.852405    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:43:58.868715    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:43:58.868724    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:43:58.881921    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:43:58.881932    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:43:58.895014    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:43:58.895026    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:43:58.920233    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:43:58.920243    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:43:58.947817    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:43:58.947831    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:43:58.992315    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:43:58.992331    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:43:59.031176    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:43:59.031189    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:43:59.048766    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:43:59.048780    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:43:59.062628    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:43:59.062638    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:43:59.076330    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:43:59.076343    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:43:56.315986    5010 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18810975s)
	I1028 04:43:56.316006    5010 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 04:43:56.332487    5010 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:43:56.335828    5010 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 04:43:56.340883    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:56.424411    5010 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:43:58.069300    5010 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.644865584s)
	I1028 04:43:58.069396    5010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:43:58.083100    5010 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:43:58.083112    5010 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:43:58.083117    5010 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 04:43:58.089165    5010 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:58.090720    5010 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.092722    5010 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.094795    5010 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:58.100117    5010 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.100126    5010 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.100379    5010 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.101246    5010 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.102035    5010 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.102258    5010 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 04:43:58.103405    5010 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.103416    5010 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.103733    5010 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.103821    5010 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.104308    5010 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 04:43:58.105394    5010 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.617351    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.628486    5010 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 04:43:58.628526    5010 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.628583    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.647808    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 04:43:58.666301    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.676850    5010 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 04:43:58.676876    5010 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.676950    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.679106    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.689968    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 04:43:58.700374    5010 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 04:43:58.700398    5010 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.700452    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.712308    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1028 04:43:58.765176    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.777151    5010 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 04:43:58.777173    5010 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.777234    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.787541    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 04:43:58.793484    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.808402    5010 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 04:43:58.808431    5010 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.808502    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.819585    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 04:43:58.841669    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 04:43:58.856486    5010 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 04:43:58.856509    5010 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 04:43:58.856576    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1028 04:43:58.867524    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 04:43:58.867661    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 04:43:58.870147    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 04:43:58.870166    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 04:43:58.879824    5010 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 04:43:58.879837    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 04:43:58.907529    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1028 04:43:58.927969    5010 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 04:43:58.928131    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.946047    5010 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 04:43:58.946066    5010 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.946131    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.956893    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 04:43:58.957041    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:43:58.958729    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 04:43:58.958749    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1028 04:43:58.996987    5010 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 04:43:58.997118    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.005875    5010 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:43:59.005889    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 04:43:59.014612    5010 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 04:43:59.014636    5010 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.014701    5010 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.055590    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 04:43:59.055711    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 04:43:59.055898    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:43:59.057572    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 04:43:59.057587    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 04:43:59.091855    5010 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:43:59.091878    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 04:43:59.355036    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 04:43:59.355075    5010 cache_images.go:92] duration metric: took 1.271945833s to LoadCachedImages
	W1028 04:43:59.355118    5010 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
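	The cache_images loop above follows one pattern per image: inspect the local image ID, compare it against the expected digest, and on mismatch remove the stale image and reload the cached tarball via `docker load`. A minimal sketch of that check; the ensureImage helper is illustrative (note `docker image inspect` reports IDs with a sha256: prefix, while the log prints the bare hash).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage makes ref present at wantID, reloading from cachedTar if not.
	func ensureImage(ref, wantID, cachedTar string) error {
		out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the right hash
		}
		_ = exec.Command("docker", "rmi", ref).Run() // ignore "no such image"
		// Equivalent of: sudo cat <tarball> | docker load
		return exec.Command("/bin/bash", "-c",
			fmt.Sprintf("cat %s | docker load", cachedTar)).Run()
	}

	func main() {
		err := ensureImage("registry.k8s.io/pause:3.7",
			"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
			"/var/lib/minikube/images/pause_3.7")
		fmt.Println(err)
	}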
	I1028 04:43:59.355123    5010 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 04:43:59.355184    5010 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-714000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 04:43:59.355273    5010 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 04:43:59.372658    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:43:59.372677    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:43:59.372686    5010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 04:43:59.372695    5010 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-714000 NodeName:stopped-upgrade-714000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 04:43:59.372771    5010 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-714000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
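	Note: the rendered config above stacks four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". The kubelet's cgroupDriver has to agree with what the container runtime reported in the earlier "docker info --format {{.CgroupDriver}}" call; a minimal manual cross-check of that pairing, assuming shell access to the guest, might look like:

	    # what Docker is actually using
	    docker info --format '{{.CgroupDriver}}'
	    # what the rendered kubelet config requests
	    grep cgroupDriver /var/tmp/minikube/kubeadm.yaml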
	
	I1028 04:43:59.372843    5010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 04:43:59.376108    5010 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 04:43:59.376149    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 04:43:59.378864    5010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 04:43:59.383853    5010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 04:43:59.388942    5010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1028 04:43:59.394283    5010 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 04:43:59.395496    5010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 04:43:59.399290    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:59.478039    5010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:43:59.486010    5010 certs.go:68] Setting up /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000 for IP: 10.0.2.15
	I1028 04:43:59.486023    5010 certs.go:194] generating shared ca certs ...
	I1028 04:43:59.486051    5010 certs.go:226] acquiring lock for ca certs: {Name:mk8f0a455373409f6ac5dde02ca67c613058d85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.486212    5010 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key
	I1028 04:43:59.486436    5010 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key
	I1028 04:43:59.486444    5010 certs.go:256] generating profile certs ...
	I1028 04:43:59.486626    5010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key
	I1028 04:43:59.486642    5010 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88
	I1028 04:43:59.486654    5010 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1028 04:43:59.605686    5010 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 ...
	I1028 04:43:59.605702    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88: {Name:mkf90a32438488277276118ea1523e9c870be5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.605963    5010 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88 ...
	I1028 04:43:59.605968    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88: {Name:mk2533bda2712187e273c8edda27e29f50a220f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.606121    5010 certs.go:381] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt
	I1028 04:43:59.606236    5010 certs.go:385] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key
	I1028 04:43:59.606484    5010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.key
	I1028 04:43:59.606629    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem (1338 bytes)
	W1028 04:43:59.606787    5010 certs.go:480] ignoring /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I1028 04:43:59.606793    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 04:43:59.606816    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem (1078 bytes)
	I1028 04:43:59.606835    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem (1123 bytes)
	I1028 04:43:59.606853    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem (1679 bytes)
	I1028 04:43:59.606890    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:43:59.607211    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 04:43:59.614671    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 04:43:59.621476    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 04:43:59.628082    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 04:43:59.635089    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 04:43:59.642192    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 04:43:59.648684    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 04:43:59.655586    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 04:43:59.662462    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I1028 04:43:59.668560    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 04:43:59.675729    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I1028 04:43:59.682796    5010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 04:43:59.688021    5010 ssh_runner.go:195] Run: openssl version
	I1028 04:43:59.690026    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I1028 04:43:59.692839    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.694106    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:47 /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.694137    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.695695    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
	I1028 04:43:59.699144    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I1028 04:43:59.702249    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.703583    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:47 /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.703610    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.705473    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 04:43:59.708258    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 04:43:59.711489    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.712781    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:40 /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.712801    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.714460    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
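	Note: the hash-named symlinks follow OpenSSL's c_rehash convention: each link is named after the certificate's subject hash (the value the preceding "openssl x509 -hash -noout" runs print) plus a ".0" suffix, which is how OpenSSL locates a trusted CA in /etc/ssl/certs at verify time. A minimal reproduction of the step above:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"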
	I1028 04:43:59.717216    5010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 04:43:59.718542    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 04:43:59.720889    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 04:43:59.722651    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 04:43:59.724672    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 04:43:59.726521    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 04:43:59.728340    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
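	Note: "-checkend 86400" asks OpenSSL whether the certificate will expire within the next 86400 seconds (24 hours); it exits 0 if the cert remains valid past that window and nonzero otherwise, so each of the runs above is a silent freshness probe. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expiring within 24h"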
	I1028 04:43:59.730303    5010 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:59.730376    5010 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:43:59.740331    5010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 04:43:59.743659    5010 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 04:43:59.743664    5010 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 04:43:59.743704    5010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 04:43:59.746478    5010 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:43:59.746776    5010 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-714000" does not appear in /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:43:59.746873    5010 kubeconfig.go:62] /Users/jenkins/minikube-integration/19876-1087/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-714000" cluster setting kubeconfig missing "stopped-upgrade-714000" context setting]
	I1028 04:43:59.747062    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.747484    5010 kapi.go:59] client config for stopped-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102d96680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:43:59.747934    5010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 04:43:59.750735    5010 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-714000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
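	Note: the drift check works because "diff -u" exits nonzero when the files differ; any delta between the deployed kubeadm.yaml and the freshly rendered .new file (here the criSocket URI scheme and the kubelet cgroupDriver/hairpin/timeout options) forces a reconfigure. An equivalent standalone check, using the paths from the log:

	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      echo "kubeadm config drift detected; reconfiguring"
	    fi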
	I1028 04:43:59.750741    5010 kubeadm.go:1160] stopping kube-system containers ...
	I1028 04:43:59.750787    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:43:59.761284    5010 docker.go:483] Stopping containers: [845363640e9e 4c70160b1032 5446ff2ad4cf 4deb81f71238 20726be67192 a3d1fe7e80ae cc397994f5aa a160fc9ffecb]
	I1028 04:43:59.761352    5010 ssh_runner.go:195] Run: docker stop 845363640e9e 4c70160b1032 5446ff2ad4cf 4deb81f71238 20726be67192 a3d1fe7e80ae cc397994f5aa a160fc9ffecb
	I1028 04:43:59.771777    5010 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 04:43:59.777823    5010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:43:59.780513    5010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:43:59.780519    5010 kubeadm.go:157] found existing configuration files:
	
	I1028 04:43:59.780547    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf
	I1028 04:43:59.783416    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:43:59.783446    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:43:59.786617    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf
	I1028 04:43:59.789351    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:43:59.789382    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:43:59.792075    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf
	I1028 04:43:59.795088    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:43:59.795116    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:43:59.798024    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf
	I1028 04:43:59.800365    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:43:59.800397    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 04:43:59.803301    5010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:43:59.806334    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:43:59.827761    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.200704    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:01.597388    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:00.336971    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.362378    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.385602    5010 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:44:00.385695    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:00.888088    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:01.387839    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:01.392194    5010 api_server.go:72] duration metric: took 1.006588875s to wait for apiserver process to appear ...
	I1028 04:44:01.392204    5010 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:44:01.392219    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
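	Note: the healthz probe is a plain HTTPS GET against the apiserver's /healthz endpoint, and the "context deadline exceeded" lines that follow mean the request timed out before any status came back. A hand-run equivalent (skipping TLS verification of the self-signed cert, with an assumed 5-second timeout):

	    curl -k --max-time 5 https://10.0.2.15:8443/healthz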
	I1028 04:44:06.599742    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:06.600236    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:06.640827    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:06.640989    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:06.661859    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:06.661972    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:06.677355    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:06.677447    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:06.690159    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:06.690243    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:06.701162    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:06.701237    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:06.712359    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:06.712438    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:06.723873    4886 logs.go:282] 0 containers: []
	W1028 04:44:06.723888    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:06.723958    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:06.735483    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:06.735500    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:06.735505    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:06.775755    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:06.775775    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:06.793439    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:06.793449    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:06.808442    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:06.808452    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:06.829219    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:06.829231    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:06.863484    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:06.863494    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:06.877491    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:06.877507    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:06.889103    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:06.889118    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:06.900676    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:06.900687    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:06.912739    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:06.912750    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:06.928742    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:06.928757    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:06.964756    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:06.964766    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:06.980271    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:06.980283    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:06.992365    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:06.992377    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:07.015920    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:07.015944    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:07.020397    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:07.020406    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:07.032705    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:07.032720    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:06.394425    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:06.394587    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:09.544086    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:11.395509    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:11.395548    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:14.544451    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:14.544572    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:14.557430    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:14.557520    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:14.567961    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:14.568042    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:14.578887    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:14.578957    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:14.589562    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:14.589645    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:14.600097    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:14.600172    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:14.610980    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:14.611059    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:14.621548    4886 logs.go:282] 0 containers: []
	W1028 04:44:14.621560    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:14.621624    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:14.632670    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:14.632687    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:14.632694    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:14.650135    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:14.650146    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:14.662769    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:14.662780    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:14.675056    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:14.675067    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:14.687010    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:14.687020    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:14.726522    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:14.726538    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:14.764240    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:14.764252    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:14.793380    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:14.793392    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:14.812023    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:14.812034    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:14.827723    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:14.827735    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:14.840194    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:14.840207    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:14.852662    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:14.852675    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:14.867675    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:14.867689    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:14.879359    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:14.879371    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:14.903242    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:14.903250    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:14.907889    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:14.907897    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:14.922388    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:14.922399    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:17.436190    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:16.396191    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:16.396254    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:22.437731    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:22.437859    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:44:22.451604    4886 logs.go:282] 2 containers: [c558c2ff458f 9f9ab9b78d6b]
	I1028 04:44:22.451693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:44:22.469372    4886 logs.go:282] 2 containers: [d2f0884fd2d4 ca8fcda7966e]
	I1028 04:44:22.469447    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:44:22.479801    4886 logs.go:282] 1 containers: [3841f491d9a9]
	I1028 04:44:22.479883    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:44:22.490349    4886 logs.go:282] 2 containers: [e098534a6b65 75a5b2c97382]
	I1028 04:44:22.490426    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:44:22.501965    4886 logs.go:282] 1 containers: [73e9ae44b0c6]
	I1028 04:44:22.502054    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:44:22.513661    4886 logs.go:282] 2 containers: [9ccb1e55871c 9e3fe090e3aa]
	I1028 04:44:22.513745    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:44:22.524459    4886 logs.go:282] 0 containers: []
	W1028 04:44:22.524477    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:44:22.524560    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:44:22.536165    4886 logs.go:282] 2 containers: [d491287a5105 614a2464b737]
	I1028 04:44:22.536185    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:44:22.536191    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:44:22.540709    4886 logs.go:123] Gathering logs for kube-apiserver [c558c2ff458f] ...
	I1028 04:44:22.540715    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c558c2ff458f"
	I1028 04:44:22.554646    4886 logs.go:123] Gathering logs for kube-apiserver [9f9ab9b78d6b] ...
	I1028 04:44:22.554660    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f9ab9b78d6b"
	I1028 04:44:22.594330    4886 logs.go:123] Gathering logs for kube-proxy [73e9ae44b0c6] ...
	I1028 04:44:22.594340    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e9ae44b0c6"
	I1028 04:44:22.614833    4886 logs.go:123] Gathering logs for etcd [ca8fcda7966e] ...
	I1028 04:44:22.614843    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8fcda7966e"
	I1028 04:44:22.628595    4886 logs.go:123] Gathering logs for coredns [3841f491d9a9] ...
	I1028 04:44:22.628606    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3841f491d9a9"
	I1028 04:44:22.640570    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:44:22.640583    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:44:22.675586    4886 logs.go:123] Gathering logs for kube-scheduler [e098534a6b65] ...
	I1028 04:44:22.675603    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098534a6b65"
	I1028 04:44:22.686990    4886 logs.go:123] Gathering logs for storage-provisioner [d491287a5105] ...
	I1028 04:44:22.687006    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d491287a5105"
	I1028 04:44:22.701482    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:44:22.701493    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:44:22.725435    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:44:22.725451    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:44:22.766185    4886 logs.go:123] Gathering logs for etcd [d2f0884fd2d4] ...
	I1028 04:44:22.766194    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f0884fd2d4"
	I1028 04:44:22.780227    4886 logs.go:123] Gathering logs for kube-scheduler [75a5b2c97382] ...
	I1028 04:44:22.780241    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a5b2c97382"
	I1028 04:44:22.795598    4886 logs.go:123] Gathering logs for kube-controller-manager [9ccb1e55871c] ...
	I1028 04:44:22.795617    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ccb1e55871c"
	I1028 04:44:22.813184    4886 logs.go:123] Gathering logs for kube-controller-manager [9e3fe090e3aa] ...
	I1028 04:44:22.813195    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e3fe090e3aa"
	I1028 04:44:22.825153    4886 logs.go:123] Gathering logs for storage-provisioner [614a2464b737] ...
	I1028 04:44:22.825168    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a2464b737"
	I1028 04:44:22.836341    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:44:22.836352    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:44:21.397275    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:21.397294    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:25.352769    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:26.398099    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:26.398175    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:30.353826    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:30.353877    4886 kubeadm.go:597] duration metric: took 4m4.429027s to restartPrimaryControlPlane
	W1028 04:44:30.353932    4886 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 04:44:30.353954    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 04:44:31.373522    4886 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019552042s)
	I1028 04:44:31.373792    4886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 04:44:31.378702    4886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:44:31.381462    4886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:44:31.384045    4886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:44:31.384054    4886 kubeadm.go:157] found existing configuration files:
	
	I1028 04:44:31.384088    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf
	I1028 04:44:31.387013    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:44:31.387043    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:44:31.390149    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf
	I1028 04:44:31.392694    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:44:31.392724    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:44:31.395687    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf
	I1028 04:44:31.398982    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:44:31.399018    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:44:31.402180    4886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf
	I1028 04:44:31.404644    4886 kubeadm.go:163] "https://control-plane.minikube.internal:57028" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57028 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:44:31.404676    4886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 04:44:31.407554    4886 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 04:44:31.425823    4886 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 04:44:31.425871    4886 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 04:44:31.476980    4886 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 04:44:31.477049    4886 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 04:44:31.477106    4886 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 04:44:31.529386    4886 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 04:44:31.532600    4886 out.go:235]   - Generating certificates and keys ...
	I1028 04:44:31.532635    4886 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 04:44:31.532669    4886 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 04:44:31.532708    4886 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 04:44:31.532742    4886 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 04:44:31.532842    4886 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 04:44:31.532870    4886 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 04:44:31.532912    4886 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 04:44:31.532943    4886 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 04:44:31.532982    4886 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 04:44:31.533023    4886 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 04:44:31.533043    4886 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 04:44:31.533103    4886 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 04:44:31.593458    4886 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 04:44:31.699508    4886 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 04:44:31.893138    4886 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 04:44:31.965012    4886 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 04:44:31.998472    4886 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 04:44:31.998777    4886 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 04:44:31.998818    4886 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 04:44:32.069143    4886 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 04:44:32.073444    4886 out.go:235]   - Booting up control plane ...
	I1028 04:44:32.073492    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 04:44:32.073529    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 04:44:32.073630    4886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 04:44:32.073704    4886 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 04:44:32.073831    4886 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 04:44:31.398983    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:31.398997    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:36.575667    4886 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502630 seconds
	I1028 04:44:36.575730    4886 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 04:44:36.579311    4886 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 04:44:37.099269    4886 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 04:44:37.099638    4886 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-687000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 04:44:37.604300    4886 kubeadm.go:310] [bootstrap-token] Using token: w7krdh.lvwpsl5dc8t7bk4m
	I1028 04:44:37.610935    4886 out.go:235]   - Configuring RBAC rules ...
	I1028 04:44:37.610994    4886 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 04:44:37.611040    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 04:44:37.617140    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 04:44:37.618627    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 04:44:37.620055    4886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 04:44:37.621126    4886 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 04:44:37.624548    4886 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 04:44:37.764610    4886 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 04:44:38.007861    4886 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 04:44:38.008237    4886 kubeadm.go:310] 
	I1028 04:44:38.008266    4886 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 04:44:38.008268    4886 kubeadm.go:310] 
	I1028 04:44:38.008315    4886 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 04:44:38.008321    4886 kubeadm.go:310] 
	I1028 04:44:38.008335    4886 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 04:44:38.008369    4886 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 04:44:38.008401    4886 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 04:44:38.008410    4886 kubeadm.go:310] 
	I1028 04:44:38.008437    4886 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 04:44:38.008440    4886 kubeadm.go:310] 
	I1028 04:44:38.008462    4886 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 04:44:38.008468    4886 kubeadm.go:310] 
	I1028 04:44:38.008497    4886 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 04:44:38.008584    4886 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 04:44:38.008631    4886 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 04:44:38.008633    4886 kubeadm.go:310] 
	I1028 04:44:38.008674    4886 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 04:44:38.008726    4886 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 04:44:38.008730    4886 kubeadm.go:310] 
	I1028 04:44:38.008770    4886 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w7krdh.lvwpsl5dc8t7bk4m \
	I1028 04:44:38.008821    4886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf \
	I1028 04:44:38.008851    4886 kubeadm.go:310] 	--control-plane 
	I1028 04:44:38.008857    4886 kubeadm.go:310] 
	I1028 04:44:38.008929    4886 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 04:44:38.008951    4886 kubeadm.go:310] 
	I1028 04:44:38.008991    4886 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w7krdh.lvwpsl5dc8t7bk4m \
	I1028 04:44:38.009041    4886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf 
	I1028 04:44:38.009125    4886 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 04:44:38.009143    4886 cni.go:84] Creating CNI manager for ""
	I1028 04:44:38.009155    4886 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:44:38.015424    4886 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 04:44:38.023485    4886 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 04:44:38.026806    4886 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
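The step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist for the bridge CNI. The actual payload is not shown in this log; the Go sketch below writes a hypothetical minimal bridge configuration only to illustrate the shape of that step.

    // Sketch only: the real 1-k8s.conflist contents (496 bytes) are not in
    // this log, so this JSON is a hypothetical minimal bridge CNI config.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [{
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
      }]
    }`

    func main() {
        // Mirrors the two ssh_runner steps above: mkdir -p, then write the file.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }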
	I1028 04:44:38.031686    4886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 04:44:38.031758    4886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 04:44:38.031772    4886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-687000 minikube.k8s.io/updated_at=2024_10_28T04_44_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=running-upgrade-687000 minikube.k8s.io/primary=true
	I1028 04:44:38.035048    4886 ops.go:34] apiserver oom_adj: -16
	I1028 04:44:38.070339    4886 kubeadm.go:1113] duration metric: took 38.618208ms to wait for elevateKubeSystemPrivileges
	I1028 04:44:38.085958    4886 kubeadm.go:394] duration metric: took 4m12.175275667s to StartCluster
	I1028 04:44:38.085976    4886 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:44:38.086089    4886 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:44:38.086466    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:44:38.086665    4886 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:44:38.086696    4886 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 04:44:38.086770    4886 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-687000"
	I1028 04:44:38.086778    4886 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-687000"
	W1028 04:44:38.086781    4886 addons.go:243] addon storage-provisioner should already be in state true
	I1028 04:44:38.086792    4886 host.go:66] Checking if "running-upgrade-687000" exists ...
	I1028 04:44:38.086811    4886 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-687000"
	I1028 04:44:38.086824    4886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-687000"
	I1028 04:44:38.086835    4886 config.go:182] Loaded profile config "running-upgrade-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:44:38.087831    4886 kapi.go:59] client config for running-upgrade-687000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/running-upgrade-687000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10495e680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:44:38.088163    4886 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-687000"
	W1028 04:44:38.088169    4886 addons.go:243] addon default-storageclass should already be in state true
	I1028 04:44:38.088176    4886 host.go:66] Checking if "running-upgrade-687000" exists ...
	I1028 04:44:38.089478    4886 out.go:177] * Verifying Kubernetes components...
	I1028 04:44:38.089794    4886 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 04:44:38.093438    4886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 04:44:38.093447    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
	I1028 04:44:38.097314    4886 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:44:38.101453    4886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:44:38.105416    4886 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:44:38.105423    4886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 04:44:38.105430    4886 sshutil.go:53] new ssh client: &{IP:localhost Port:56996 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa Username:docker}
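The sshutil lines above open SSH sessions into the guest (localhost:56996, user "docker", key auth) that the subsequent Run: commands go through. A minimal sketch of that connection, assuming golang.org/x/crypto/ssh and the key path shown in the log; this is an illustration, not minikube's sshutil implementation.

    // Minimal sketch of the ssh client the log creates; error handling is
    // trimmed to panics for brevity. Host, port, user, and key path are
    // copied from the log lines above.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/running-upgrade-687000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "localhost:56996", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl daemon-reload") // same command the log runs next
        fmt.Print(string(out))
    }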
	I1028 04:44:38.178463    4886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:44:38.184303    4886 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:44:38.184359    4886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:38.191285    4886 api_server.go:72] duration metric: took 104.60725ms to wait for apiserver process to appear ...
	I1028 04:44:38.191296    4886 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:44:38.191306    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:38.204103    4886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:44:38.257405    4886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 04:44:38.530483    4886 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 04:44:38.530500    4886 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
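The long run of "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded" pairs that follows is a poll loop: an HTTPS GET against /healthz with a short client timeout, retried until the apiserver answers or the overall wait expires. A hypothetical reconstruction of that loop, not minikube's actual api_server.go:

    // Hypothetical healthz poller matching the pattern in the log below:
    // each attempt times out after ~5s and is retried.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // source of "Client.Timeout exceeded while awaiting headers"
            Transport: &http.Transport{
                // Sketch shortcut; a real check would trust the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for attempt := 0; attempt < 10; attempt++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
                continue // the client timeout already spaced out this attempt
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz ok")
                return
            }
        }
        fmt.Println("gave up waiting for apiserver")
    }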
	I1028 04:44:36.400366    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:36.400386    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:43.193454    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:43.193526    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:41.402279    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:41.402327    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:48.193961    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:48.193997    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:46.404692    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:46.404712    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:53.194453    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:53.194500    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:51.406984    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:51.407022    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:58.195048    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:58.195110    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:56.409318    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:56.409358    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:03.195890    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:03.195950    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:01.411706    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:01.411896    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:01.424419    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:01.424512    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:01.435508    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:01.435593    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:01.445887    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:01.445975    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:01.459364    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:01.459450    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:01.469961    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:01.470048    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:01.480724    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:01.480798    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:01.491109    5010 logs.go:282] 0 containers: []
	W1028 04:45:01.491122    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:01.491191    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:01.501707    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
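Once the healthz wait fails, the log collector enumerates containers one component at a time with docker ps name filters (the k8s_ prefix is how kubelet-managed containers are named under the Docker runtime). A sketch of that discovery pass, shelling out the same way the Run: lines above do; the helper name is made up:

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("error listing %s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }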
	I1028 04:45:01.501728    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:01.501733    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:01.515617    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:01.515627    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:01.526753    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:01.526766    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:01.539708    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:01.539716    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:01.554131    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:01.554142    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:01.558522    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:01.558531    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:01.600474    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:01.600485    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:01.611838    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:01.611849    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:01.708247    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:01.708257    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:01.729764    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:01.729773    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:01.741428    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:01.741442    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:01.761660    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:01.761671    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:01.779095    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:01.779105    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:01.817744    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:01.817758    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:01.834659    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:01.834670    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:01.846543    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:01.846556    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:01.859427    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:01.859439    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
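The "Gathering logs for ..." steps above each tail the last 400 lines of one container (or one journald unit for kubelet and Docker). A sketch of a single gathering step, using the same bash invocation shown in the log; the wrapper function is illustrative only:

    // gather mirrors one step of the pass above: docker logs --tail 400 <id>,
    // run through /bin/bash -c as in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(component, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
        out, err := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Printf("error: %v\n", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("etcd", "8d85be6f6ccb") // container ID taken from the log above
    }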
	I1028 04:45:04.387619    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:08.196931    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:08.197013    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 04:45:08.533027    4886 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 04:45:08.541299    4886 out.go:177] * Enabled addons: storage-provisioner
	I1028 04:45:08.547288    4886 addons.go:510] duration metric: took 30.460470959s for enable addons: enabled=[storage-provisioner]
	I1028 04:45:09.389940    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:09.390305    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:09.416587    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:09.416717    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:09.433921    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:09.434020    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:09.448718    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:09.448798    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:09.460116    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:09.460194    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:09.470253    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:09.470338    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:09.481459    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:09.481538    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:09.492069    5010 logs.go:282] 0 containers: []
	W1028 04:45:09.492083    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:09.492148    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:09.502232    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:09.502254    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:09.502260    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:09.514105    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:09.514119    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:09.527672    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:09.527684    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:09.558481    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:09.558490    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:09.571030    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:09.571041    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:09.584944    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:09.584960    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:09.599491    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:09.599502    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:09.614695    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:09.614706    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:09.626124    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:09.626135    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:09.638085    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:09.638103    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:09.650185    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:09.650194    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:09.662123    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:09.662137    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:09.698841    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:09.698852    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:09.702839    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:09.702848    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:09.740599    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:09.740615    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:09.778338    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:09.778349    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:09.795203    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:09.795214    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:13.198627    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:13.198694    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:12.314845    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:18.200387    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:18.200441    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:17.317242    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:17.317582    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:17.345705    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:17.345851    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:17.363728    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:17.363818    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:17.377293    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:17.377378    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:17.389143    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:17.389213    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:17.399934    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:17.400007    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:17.410535    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:17.410610    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:17.421042    5010 logs.go:282] 0 containers: []
	W1028 04:45:17.421057    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:17.421113    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:17.431485    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:17.431514    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:17.431521    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:17.436083    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:17.436092    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:17.474949    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:17.474966    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:17.490519    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:17.490529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:17.502840    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:17.502854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:17.519985    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:17.519997    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:17.531111    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:17.531123    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:17.554700    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:17.554706    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:17.590332    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:17.590343    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:17.604561    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:17.604570    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:17.615985    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:17.615999    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:17.627576    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:17.627586    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:17.664340    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:17.664348    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:17.678633    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:17.678644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:17.694552    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:17.694562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:17.705785    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:17.705796    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:17.719574    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:17.719584    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:20.233958    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:23.202770    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:23.202811    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:28.203767    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:28.203784    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:25.236265    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:25.236428    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:25.253513    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:25.253615    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:25.267004    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:25.267086    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:25.278036    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:25.278111    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:25.288717    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:25.288797    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:25.299374    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:25.299457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:25.311800    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:25.311874    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:25.322457    5010 logs.go:282] 0 containers: []
	W1028 04:45:25.322473    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:25.322545    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:25.333541    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:25.333562    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:25.333567    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:25.373809    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:25.373823    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:25.392908    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:25.392920    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:25.409568    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:25.409579    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:25.447086    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:25.447097    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:25.464908    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:25.464919    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:25.477417    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:25.477427    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:25.488848    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:25.488859    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:25.514460    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:25.514468    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:25.518819    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:25.518828    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:25.532621    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:25.532630    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:25.549580    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:25.549591    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:25.560639    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:25.560650    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:25.572327    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:25.572339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:25.583536    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:25.583546    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:25.622722    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:25.622732    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:25.637920    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:25.637929    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:28.151934    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:33.205950    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:33.205976    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:33.153521    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:33.153806    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:33.181354    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:33.181458    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:33.198507    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:33.198595    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:33.210823    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:33.210900    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:33.221571    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:33.221654    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:33.232584    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:33.232660    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:33.243352    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:33.243430    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:33.253616    5010 logs.go:282] 0 containers: []
	W1028 04:45:33.253628    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:33.253687    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:33.264012    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:33.264030    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:33.264035    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:33.300796    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:33.300806    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:33.338732    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:33.338743    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:33.356103    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:33.356113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:33.371644    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:33.371655    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:33.383319    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:33.383331    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:33.387668    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:33.387679    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:33.401583    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:33.401593    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:33.415760    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:33.415770    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:33.426508    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:33.426519    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:33.438222    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:33.438232    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:33.451972    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:33.451986    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:33.469480    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:33.469493    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:33.482483    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:33.482494    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:33.519942    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:33.519953    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:33.532427    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:33.532441    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:33.545019    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:33.545029    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:38.208198    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:38.208374    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:38.219557    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:38.219638    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:38.233808    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:38.233888    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:38.244747    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:38.244824    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:38.255067    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:38.255141    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:38.265326    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:38.265408    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:38.275772    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:38.275844    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:38.286474    4886 logs.go:282] 0 containers: []
	W1028 04:45:38.286485    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:38.286546    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:38.296661    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:38.296676    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:38.296681    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:38.333285    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:38.333295    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:38.348039    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:38.348053    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:38.365010    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:38.365021    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:38.376314    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:38.376325    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:38.388053    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:38.388065    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:38.402164    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:38.402178    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:38.406673    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:38.406680    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:38.441640    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:38.441653    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:38.453456    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:38.453465    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:38.471381    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:38.471392    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:38.490746    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:38.490755    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:38.514368    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:38.514376    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:36.071658    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:41.028948    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:41.074071    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:41.074399    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:41.105562    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:41.105714    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:41.123431    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:41.123536    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:41.137077    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:41.137170    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:41.149585    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:41.149665    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:41.161182    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:41.161263    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:41.172059    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:41.172135    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:41.182459    5010 logs.go:282] 0 containers: []
	W1028 04:45:41.182476    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:41.182546    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:41.199661    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:41.199681    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:41.199688    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:41.215436    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:41.215447    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:41.234043    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:41.234054    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:41.248548    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:41.248562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:41.259857    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:41.259869    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:41.264139    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:41.264146    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:41.279560    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:41.279574    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:41.316873    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:41.316883    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:41.341644    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:41.341656    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:41.355328    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:41.355345    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:41.390703    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:41.390713    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:41.402819    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:41.402830    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:41.415070    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:41.415079    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:41.428615    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:41.428631    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:41.444557    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:41.444571    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:41.480541    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:41.480549    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:41.494298    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:41.494311    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:44.014232    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:46.030902    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:46.031130    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:46.050272    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:46.050373    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:46.064530    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:46.064622    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:46.075509    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:46.075582    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:46.085653    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:46.085729    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:46.096255    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:46.096328    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:46.106760    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:46.106829    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:46.121717    4886 logs.go:282] 0 containers: []
	W1028 04:45:46.121735    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:46.121803    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:46.132705    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:46.132724    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:46.132729    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:46.169751    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:46.169763    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:46.185791    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:46.185804    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:46.197153    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:46.197167    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:46.209434    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:46.209446    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:46.225350    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:46.225366    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:46.239888    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:46.239901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:46.258454    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:46.258467    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:46.263203    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:46.263211    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:46.299753    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:46.299765    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:46.313783    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:46.313793    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:46.328640    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:46.328653    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:46.340631    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:46.340643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:48.868163    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:49.016689    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:49.016977    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:49.033037    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:49.033140    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:49.045887    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:49.045972    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:49.056824    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:49.056890    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:49.069212    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:49.069286    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:49.079611    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:49.079675    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:49.090501    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:49.090573    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:49.100510    5010 logs.go:282] 0 containers: []
	W1028 04:45:49.100525    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:49.100592    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:49.111094    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:49.111112    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:49.111117    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:49.121954    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:49.121963    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:49.133540    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:49.133550    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:49.154979    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:49.154988    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:49.167446    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:49.167455    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:49.181500    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:49.181509    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:49.195038    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:49.195046    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:49.209317    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:49.209326    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:49.220657    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:49.220668    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:49.236679    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:49.236688    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:49.274352    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:49.274369    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:49.318139    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:49.318150    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:49.329842    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:49.329854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:49.342013    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:49.342024    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:49.353608    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:49.353618    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:49.357978    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:49.357987    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:49.396506    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:49.396517    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
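
Between passes, each process probes the apiserver at https://10.0.2.15:8443/healthz and records "stopped: ... context deadline exceeded" when no response arrives in time. A hedged sketch of such a probe loop; the 2-second timeout, the back-off interval, and the TLS handling are assumptions, since the log does not show the real settings:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second, // assumed; the real deadline is not in the log
    		Transport: &http.Transport{
    			// The apiserver cert inside the VM is self-signed, so a
    			// probe like this would skip verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // matches the log's failure lines
    		} else {
    			resp.Body.Close()
    			fmt.Println("healthz:", resp.Status)
    			return
    		}
    		time.Sleep(3 * time.Second) // assumed back-off before the next probe
    	}
    }
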
	I1028 04:45:53.870933    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:53.871196    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:53.895918    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:45:53.896046    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:53.912704    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:45:53.912806    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:53.925918    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:45:53.926002    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:53.937006    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:45:53.937084    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:53.947141    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:45:53.947208    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:53.957445    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:45:53.957513    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:53.967642    4886 logs.go:282] 0 containers: []
	W1028 04:45:53.967654    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:53.967717    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:53.978178    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:45:53.978196    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:53.978201    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:54.014212    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:45:54.014222    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:45:54.031104    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:45:54.031114    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:45:54.046295    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:45:54.046307    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:45:54.059336    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:45:54.059348    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:45:54.073401    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:45:54.073412    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:45:54.084924    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:45:54.084936    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:45:54.102976    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:54.102988    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:54.141278    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:54.141289    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:54.146278    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:45:54.146284    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:45:54.167080    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:45:54.167091    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:45:54.178959    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:54.178970    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:54.203917    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:45:54.203928    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
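
The "container status" step shells out with a fallback: the backticked `which crictl || echo crictl` resolves to a usable binary name whether or not crictl is installed, and the trailing `|| sudo docker ps -a` covers the case where crictl itself fails. A sketch of the same invocation from Go; the wrapper name containerStatus is our own:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	// Running through /bin/bash -c mirrors the ssh_runner lines above;
    	// the backticks are shell command substitution.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(out)
    }
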
	I1028 04:45:51.923951    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:56.718657    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:56.925463    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:56.925658    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:56.949180    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:56.949307    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:56.966537    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:56.966633    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:56.978931    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:56.979017    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:56.990090    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:56.990169    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:57.000635    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:57.000715    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:57.011509    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:57.011587    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:57.021989    5010 logs.go:282] 0 containers: []
	W1028 04:45:57.022001    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:57.022060    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:57.032631    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:57.032653    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:57.032659    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:57.057651    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:57.057659    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:57.069211    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:57.069223    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:57.073595    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:57.073601    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:57.109358    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:57.109369    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:57.120797    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:57.120810    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:57.134631    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:57.134644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:57.150508    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:57.150518    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:57.165645    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:57.165657    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:57.205488    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:57.205499    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:57.218037    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:57.218047    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:57.229230    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:57.229241    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:57.243024    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:57.243033    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:57.258587    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:57.258598    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:57.295024    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:57.295037    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:57.310352    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:57.310362    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:57.322831    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:57.322842    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:59.842586    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:01.721163    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:01.721609    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:01.756222    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:01.756377    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:01.776635    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:01.776747    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:01.792316    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:01.792407    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:01.805554    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:01.805634    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:01.817602    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:01.817676    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:01.828345    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:01.828426    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:01.842389    4886 logs.go:282] 0 containers: []
	W1028 04:46:01.842401    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:01.842467    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:01.853811    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:01.853826    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:01.853832    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:01.865790    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:01.865802    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:01.877308    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:01.877319    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:01.911751    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:01.911761    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:01.926191    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:01.926202    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:01.937886    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:01.937897    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:01.949704    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:01.949719    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:01.968497    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:01.968508    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:01.992389    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:01.992400    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:02.028762    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:02.028770    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:02.033434    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:02.033442    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:02.048073    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:02.048086    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:02.062696    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:02.062707    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:04.845353    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:04.845749    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:04.879590    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:04.879730    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:04.898484    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:04.898570    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:04.912736    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:04.912825    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:04.925614    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:04.925695    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:04.936489    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:04.936567    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:04.947441    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:04.947525    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:04.957786    5010 logs.go:282] 0 containers: []
	W1028 04:46:04.957802    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:04.957861    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:04.969062    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:04.969080    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:04.969085    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:04.980874    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:04.980885    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:04.998535    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:04.998545    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:05.034427    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:05.034438    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:05.048768    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:05.048782    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:05.062468    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:05.062480    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:05.101344    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:05.101357    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:05.116270    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:05.116282    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:05.128323    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:05.128337    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:05.147914    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:05.147925    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:05.161501    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:05.161512    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:05.166221    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:05.166229    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:05.182226    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:05.182238    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:05.215839    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:05.215854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:05.231416    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:05.231428    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:04.576852    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:05.254831    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:05.254840    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:05.290883    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:05.290891    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:07.808496    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:09.579215    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:09.579465    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:09.601604    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:09.601708    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:09.616481    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:09.616572    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:09.629183    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:09.629265    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:09.640038    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:09.640115    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:09.650925    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:09.650996    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:09.661679    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:09.661755    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:09.672460    4886 logs.go:282] 0 containers: []
	W1028 04:46:09.672472    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:09.672540    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:09.683266    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:09.683285    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:09.683293    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:09.718007    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:09.718020    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:09.732343    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:09.732357    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:09.746465    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:09.746475    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:09.757787    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:09.757798    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:09.772093    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:09.772104    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:09.783826    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:09.783839    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:09.798200    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:09.798211    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:09.835937    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:09.835947    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:09.849603    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:09.849615    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:09.868724    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:09.868742    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:09.880105    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:09.880119    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:09.905635    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:09.905643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
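
The journal-backed steps ("kubelet", "Docker", "dmesg") all tail a bounded number of lines rather than dumping whole units, which keeps each pass cheap even when the cluster is wedged. A sketch under the assumption that one helper drives all three; the gather wrapper is illustrative, while the commands are copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name, shellCmd string) {
    	fmt.Println("Gathering logs for", name, "...")
    	out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("  failed:", err)
    		return
    	}
    	fmt.Printf("  %d bytes\n", len(out))
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
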
	I1028 04:46:12.412203    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:12.810919    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:12.811168    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:12.833832    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:12.833955    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:12.849570    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:12.849666    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:12.862011    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:12.862094    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:12.873318    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:12.873396    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:12.883984    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:12.884062    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:12.894842    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:12.894930    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:12.906403    5010 logs.go:282] 0 containers: []
	W1028 04:46:12.906415    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:12.906487    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:12.921370    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:12.921391    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:12.921398    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:12.937551    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:12.937560    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:12.973971    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:12.973985    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:13.010014    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:13.010028    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:13.024621    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:13.024631    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:13.039222    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:13.039232    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:13.060635    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:13.060651    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:13.074006    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:13.074016    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:13.085647    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:13.085657    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:13.097784    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:13.097793    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:13.109218    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:13.109229    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:13.114039    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:13.114045    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:13.131228    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:13.131237    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:13.155950    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:13.155959    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:13.193995    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:13.194005    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:13.206165    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:13.206176    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:13.218501    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:13.218512    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:17.414482    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:17.414703    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:17.433520    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:17.433620    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:17.447280    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:17.447358    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:17.458046    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:17.458115    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:17.468914    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:17.468989    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:17.479562    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:17.479648    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:17.490224    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:17.490292    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:17.500962    4886 logs.go:282] 0 containers: []
	W1028 04:46:17.500974    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:17.501037    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:17.514525    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:17.514540    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:17.514546    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:17.526218    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:17.526231    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:17.563996    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:17.564004    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:17.568382    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:17.568390    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:17.582918    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:17.582930    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:17.595013    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:17.595027    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:17.607116    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:17.607126    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:17.624737    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:17.624748    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:17.663201    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:17.663215    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:17.676638    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:17.676648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:17.690813    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:17.690826    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:17.702985    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:17.702996    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:17.714972    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:17.714984    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
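
The "describe nodes" step does not use the host's kubectl: it runs the version-matched binary minikube staged inside the guest (v1.24.1 here) against the guest kubeconfig. A minimal sketch of that invocation; only the paths and arguments come from the log, the surrounding program is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Print(string(out))
    }
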
	I1028 04:46:15.741820    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:20.239854    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:20.744154    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:20.744375    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:20.768921    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:20.769031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:20.783301    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:20.783390    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:20.797458    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:20.797531    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:20.808227    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:20.808310    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:20.818421    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:20.818495    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:20.828708    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:20.828788    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:20.838766    5010 logs.go:282] 0 containers: []
	W1028 04:46:20.838777    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:20.838835    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:20.848996    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:20.849018    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:20.849023    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:20.888283    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:20.888296    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:20.900402    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:20.900414    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:20.919101    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:20.919113    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:20.943854    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:20.943862    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:20.981863    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:20.981875    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:20.996994    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:20.997006    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:21.008757    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:21.008766    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:21.020599    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:21.020610    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:21.024771    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:21.024778    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:21.065116    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:21.065126    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:21.076499    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:21.076512    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:21.094185    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:21.094195    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:21.106640    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:21.106650    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:21.123635    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:21.123647    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:21.138341    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:21.138352    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:21.149898    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:21.149910    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:23.665852    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:25.242363    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:25.242818    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:25.278966    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:25.279103    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:25.298680    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:25.298779    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:25.312623    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:25.312710    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:25.325006    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:25.325089    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:25.337789    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:25.337874    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:25.352483    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:25.352564    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:25.363298    4886 logs.go:282] 0 containers: []
	W1028 04:46:25.363315    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:25.363386    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:25.374012    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:25.374027    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:25.374033    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:25.388858    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:25.388875    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:25.404786    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:25.404797    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:25.417508    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:25.417519    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:25.433980    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:25.433990    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:25.458739    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:25.458748    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:25.477193    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:25.477203    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:25.489577    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:25.489593    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:25.501356    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:25.501366    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:25.539988    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:25.539997    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:25.545296    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:25.545305    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:25.579563    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:25.579575    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:25.591523    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:25.591535    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:28.116038    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:28.668152    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:28.668300    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:28.681649    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:28.681742    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:28.692924    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:28.693008    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:28.703154    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:28.703234    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:28.713853    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:28.713928    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:28.724214    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:28.724295    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:28.734923    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:28.734998    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:28.745549    5010 logs.go:282] 0 containers: []
	W1028 04:46:28.745561    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:28.745626    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:28.756045    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:28.756067    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:28.756073    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:28.760183    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:28.760192    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:28.794099    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:28.794113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:28.809023    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:28.809033    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:28.820685    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:28.820695    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:28.844597    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:28.844605    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:28.882839    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:28.882847    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:28.920542    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:28.920553    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:28.934255    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:28.934264    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:28.945541    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:28.945552    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:28.966619    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:28.966629    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:28.980904    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:28.980913    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:28.995856    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:28.995865    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:29.007561    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:29.007570    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:29.023301    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:29.023311    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:29.034897    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:29.034909    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:29.045956    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:29.045967    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
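
Finally, every discovered container ID is tailed the same way, `docker logs --tail 400 <id>`, including exited containers, which is how both the current and the previous kube-apiserver instance (35577bdedd1d, 4c70160b1032) show up in each pass. A sketch with tailLogs as an assumed helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func tailLogs(id string) (string, error) {
    	// Mirrors the logged /bin/bash -c "docker logs --tail 400 <id>" calls.
    	out, err := exec.Command("/bin/bash", "-c",
    		"docker logs --tail 400 "+id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, id := range []string{"35577bdedd1d", "4c70160b1032"} { // IDs from the log
    		out, err := tailLogs(id)
    		if err != nil {
    			fmt.Println(id, "failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", id, len(out))
    	}
    }
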
	I1028 04:46:33.118398    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:33.118693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:33.145080    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:33.145224    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:33.162560    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:33.162661    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:33.177196    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:33.177281    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:33.188547    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:33.188625    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:33.198745    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:33.198817    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:33.212649    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:33.212715    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:33.222657    4886 logs.go:282] 0 containers: []
	W1028 04:46:33.222670    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:33.222732    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:33.233070    4886 logs.go:282] 1 containers: [d4660ff68fc4]
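
Before each gathering pass, the runner locates the containers for every control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; the `N containers: […]` lines (logs.go:282) report the IDs found, two IDs per component indicate a restarted container, and zero matches produce the "No container was found matching" warning seen for kindnet. A minimal sketch of that discovery step, assuming a local docker CLI (`containerIDs` is an illustrative name):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs one `docker ps` query per component, mirroring the
// discovery commands in the log, and returns the matching container IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:282
	}
}
```
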
	I1028 04:46:33.233087    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:33.233092    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:33.270988    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:33.270999    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:33.305424    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:33.305436    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:33.316996    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:33.317008    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:33.328172    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:33.328184    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:33.353273    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:33.353282    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:33.365042    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:33.365053    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:33.383029    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:33.383040    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:33.387947    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:33.387956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:33.402527    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:33.402538    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:33.416566    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:33.416581    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:33.427712    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:33.427723    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:33.441417    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:33.441432    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
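
The "Checking apiserver healthz … / stopped: …" pairs that bracket every cycle come from a timed HTTP probe: a GET against https://10.0.2.15:8443/healthz whose client-side timeout expires before any response headers arrive, producing exactly the logged "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error. A minimal sketch under that reading; the 5-second timeout is inferred from the ~5 s gap between each check and its "stopped" line, and the TLS configuration here is an assumption for the sketch (the real client is built from the cluster's certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver the way api_server.go:253/269 logs it:
// a plain GET whose http.Client timeout turns an unresponsive apiserver
// into the "Client.Timeout exceeded while awaiting headers" error above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // ~5 s separates each check from its timeout in the log
		// Assumption for the sketch only; minikube verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	fmt.Printf("Checking apiserver healthz at %s ...\n", url)
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// On each timeout the caller falls back to the container-discovery and
	// log-gathering passes shown throughout this section, then retries.
	for attempt := 0; attempt < 3; attempt++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```
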
	I1028 04:46:31.560119    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:35.959086    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:36.562478    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:36.562586    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:36.574046    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:36.574127    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:36.584634    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:36.584706    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:36.595111    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:36.595189    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:36.605381    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:36.605457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:36.616113    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:36.616189    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:36.627366    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:36.627449    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:36.645087    5010 logs.go:282] 0 containers: []
	W1028 04:46:36.645099    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:36.645158    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:36.655709    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:36.655728    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:36.655733    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:36.690385    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:36.690396    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:36.705443    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:36.705454    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:36.743342    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:36.743355    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:36.759037    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:36.759048    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:36.780628    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:36.780638    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:36.820349    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:36.820359    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:36.832710    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:36.832720    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:36.843888    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:36.843898    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:36.856310    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:36.856320    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:36.869448    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:36.869458    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:36.883849    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:36.883860    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:36.896012    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:36.896023    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:36.908210    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:36.908220    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:36.926124    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:36.926133    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:36.938014    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:36.938024    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:36.961890    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:36.961897    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:39.468425    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:40.961407    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:40.961621    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:40.978974    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:40.979078    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:40.991850    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:40.991935    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:41.002628    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:41.002703    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:41.013527    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:41.013599    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:41.023980    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:41.024051    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:41.034610    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:41.034693    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:41.045224    4886 logs.go:282] 0 containers: []
	W1028 04:46:41.045237    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:41.045304    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:41.055755    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:41.055772    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:41.055777    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:41.067310    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:41.067320    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:41.081289    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:41.081302    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:41.097950    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:41.097961    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:41.121727    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:41.121734    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:41.133051    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:41.133061    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:41.169448    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:41.169462    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:41.181957    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:41.181971    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:41.196824    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:41.196836    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:41.214574    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:41.214587    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:41.231690    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:41.231703    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:41.243490    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:41.243501    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:41.280963    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:41.280973    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:43.787593    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:44.469326    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:44.469457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:44.483452    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:44.483541    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:44.494613    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:44.494696    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:44.505554    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:44.505627    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:44.515674    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:44.515754    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:44.526618    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:44.526697    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:44.537561    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:44.537639    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:44.548186    5010 logs.go:282] 0 containers: []
	W1028 04:46:44.548198    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:44.548263    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:44.558778    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:44.558802    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:44.558807    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:44.595761    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:44.595769    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:44.599909    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:44.599915    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:44.636619    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:44.636629    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:44.648032    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:44.648046    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:44.659873    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:44.659883    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:44.694259    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:44.694270    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:44.709333    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:44.709343    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:44.720815    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:44.720824    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:44.732903    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:44.732913    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:44.749083    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:44.749092    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:44.771191    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:44.771200    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:44.788202    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:44.788214    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:44.802743    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:44.802752    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:44.814208    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:44.814221    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:44.825990    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:44.826000    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:44.843665    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:44.843678    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
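
The two PIDs in this section (4886 and 5010) belong to separate test processes, each polling its own QEMU VM at the user-network default guest address 10.0.2.15, which is why their entries interleave and the wall-clock time occasionally steps backwards between adjacent lines. A toy model of that interleaving, not minikube code: two independent pollers sharing one output stream emit lines on their own schedules rather than in a single time order.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// poller stands in for one test process's healthz loop; the sleep stands
// in for the probe's client timeout.
func poller(pid string, period time.Duration, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 3; i++ {
		fmt.Printf("%s checking healthz ...\n", pid)
		time.Sleep(period)
		fmt.Printf("%s stopped: healthz timed out\n", pid)
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go poller("4886", 450*time.Millisecond, &wg)
	go poller("5010", 700*time.Millisecond, &wg)
	wg.Wait() // output from the two pollers interleaves nondeterministically
}
```
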
	I1028 04:46:48.790350    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:48.790820    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:48.832641    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:48.832803    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:48.858118    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:48.858237    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:48.876745    4886 logs.go:282] 2 containers: [e6b675482666 3bc718a2c833]
	I1028 04:46:48.876823    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:48.888114    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:48.888186    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:48.898442    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:48.898512    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:48.909126    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:48.909208    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:48.919667    4886 logs.go:282] 0 containers: []
	W1028 04:46:48.919680    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:48.919749    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:48.930000    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:48.930014    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:48.930020    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:48.944494    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:48.944509    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:48.958240    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:48.958252    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:48.977473    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:48.977487    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:48.989215    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:48.989230    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:49.001804    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:49.001815    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:49.025336    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:49.025346    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:49.037284    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:49.037294    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:49.075262    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:49.075270    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:49.113441    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:49.113453    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:49.132044    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:49.132055    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:49.154728    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:49.154739    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:49.166448    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:49.166459    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:47.361040    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:51.673150    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:52.363805    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:52.364099    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:52.392847    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:52.392973    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:52.414678    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:52.414775    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:52.427470    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:52.427553    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:52.439239    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:52.439322    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:52.451551    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:52.451626    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:52.462422    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:52.462488    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:52.472804    5010 logs.go:282] 0 containers: []
	W1028 04:46:52.472816    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:52.472883    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:52.483984    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:52.484003    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:52.484008    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:52.500701    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:52.500712    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:52.523442    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:52.523452    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:52.561784    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:52.561795    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:52.575958    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:52.575969    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:52.590382    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:52.590392    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:52.602220    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:52.602230    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:52.620775    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:52.620790    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:52.633052    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:52.633065    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:52.644723    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:52.644733    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:52.656235    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:52.656249    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:52.692624    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:52.692633    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:52.727437    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:52.727448    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:52.745195    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:52.745205    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:52.761599    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:52.761609    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:52.776787    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:52.776798    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:52.780805    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:52.780811    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:56.675511    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:56.675749    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:56.697302    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:46:56.697419    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:56.712793    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:46:56.712872    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:56.728968    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:46:56.729055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:56.739657    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:46:56.739741    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:56.750059    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:46:56.750135    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:56.760764    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:46:56.760836    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:56.771003    4886 logs.go:282] 0 containers: []
	W1028 04:46:56.771015    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:56.771078    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:56.781783    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:46:56.781804    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:46:56.781809    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:46:56.797194    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:46:56.797206    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:46:56.811403    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:46:56.811418    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:46:56.823060    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:46:56.823069    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:56.834738    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:46:56.834749    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:46:56.846750    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:46:56.846762    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:46:56.864155    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:56.864166    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:56.935634    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:46:56.935648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:46:56.948665    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:46:56.948676    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:46:56.968465    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:46:56.968475    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:46:56.980153    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:56.980168    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:57.005069    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:57.005077    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:57.042566    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:57.042577    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:57.047069    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:46:57.047076    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:46:57.058636    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:46:57.058648    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:46:55.294540    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:59.579070    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:00.297288    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:00.297480    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:00.316840    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:00.316937    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:00.329341    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:00.329415    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:00.340234    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:00.340311    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:00.350639    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:00.350721    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:00.365178    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:00.365249    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:00.375829    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:00.375909    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:00.386606    5010 logs.go:282] 0 containers: []
	W1028 04:47:00.386631    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:00.386695    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:00.397500    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:00.397523    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:00.397528    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:00.411907    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:00.411917    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:00.423723    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:00.423735    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:00.439433    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:00.439442    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:00.450886    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:00.450898    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:00.455392    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:00.455400    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:00.491904    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:00.491916    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:00.503733    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:00.503749    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:00.525910    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:00.525921    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:00.551438    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:00.551448    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:00.563905    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:00.563917    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:00.603682    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:00.603694    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:00.622785    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:00.622794    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:00.637934    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:00.637945    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:00.653469    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:00.653478    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:00.664815    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:00.664827    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:00.703550    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:00.703562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:03.217556    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:04.579831    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:04.579955    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:04.590717    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:04.590792    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:04.601654    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:04.601731    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:04.612764    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:04.612851    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:04.623306    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:04.623374    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:04.633723    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:04.633793    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:04.644110    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:04.644184    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:04.654226    4886 logs.go:282] 0 containers: []
	W1028 04:47:04.654244    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:04.654310    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:04.665208    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:04.665224    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:04.665230    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:04.670267    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:04.670275    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:04.716981    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:04.716995    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:04.729670    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:04.729682    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:04.748742    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:04.748755    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:04.760305    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:04.760315    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:04.785369    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:04.785379    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:04.799447    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:04.799460    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:04.814063    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:04.814076    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:04.832081    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:04.832090    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:04.868257    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:04.868265    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:04.886308    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:04.886318    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:04.901737    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:04.901752    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:04.914032    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:04.914044    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:04.928269    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:04.928279    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:07.442034    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:08.219904    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:08.220084    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:08.235958    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:08.236064    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:08.248988    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:08.249065    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:08.261987    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:08.262061    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:08.272141    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:08.272220    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:08.282658    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:08.282727    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:08.293654    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:08.293730    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:08.303548    5010 logs.go:282] 0 containers: []
	W1028 04:47:08.303559    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:08.303619    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:08.314411    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:08.314431    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:08.314436    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:08.326264    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:08.326273    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:08.359681    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:08.359690    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:08.399003    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:08.399012    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:08.433433    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:08.433445    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:08.447790    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:08.447801    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:08.459227    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:08.459237    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:08.470538    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:08.470548    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:08.495178    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:08.495189    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:08.499743    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:08.499750    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:08.513509    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:08.513518    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:08.529107    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:08.529116    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:08.543099    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:08.543113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:08.554889    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:08.554898    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:08.569291    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:08.569305    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:08.611447    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:08.611458    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:08.623102    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:08.623115    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:12.444456    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:12.444715    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:12.468084    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:12.468209    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:12.484926    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:12.485022    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:12.498376    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:12.498458    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:12.509921    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:12.509991    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:12.521593    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:12.521671    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:12.532050    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:12.532128    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:12.541666    4886 logs.go:282] 0 containers: []
	W1028 04:47:12.541681    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:12.541739    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:12.552901    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:12.552922    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:12.552927    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:12.567350    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:12.567360    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:12.579558    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:12.579569    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:12.604087    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:12.604103    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:12.615612    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:12.615622    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:12.627276    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:12.627287    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:12.662842    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:12.662856    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:12.667245    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:12.667252    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:12.701945    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:12.701956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:12.714174    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:12.714186    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:12.729105    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:12.729115    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:12.741340    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:12.741353    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:12.755876    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:12.755887    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:12.766951    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:12.766961    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:12.777955    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:12.777966    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:11.136006    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:15.298037    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:16.138364    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:16.138521    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:16.151688    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:16.151766    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:16.162459    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:16.162544    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:16.173485    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:16.173565    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:16.184106    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:16.184183    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:16.195289    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:16.195367    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:16.209175    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:16.209254    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:16.219625    5010 logs.go:282] 0 containers: []
	W1028 04:47:16.219638    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:16.219703    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:16.236724    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:16.236744    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:16.236750    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:16.275144    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:16.275156    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:16.287437    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:16.287449    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:16.298833    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:16.298844    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:16.310459    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:16.310473    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:16.324643    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:16.324653    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:16.336812    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:16.336825    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:16.351516    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:16.351529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:16.366297    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:16.366307    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:16.378475    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:16.378486    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:16.399177    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:16.399187    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:16.436149    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:16.436161    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:16.440445    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:16.440454    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:16.475606    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:16.475617    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:16.489894    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:16.489905    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:16.504076    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:16.504089    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:16.515458    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:16.515469    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:19.040219    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:20.300855    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:20.301327    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:20.336198    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:20.336347    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:20.356470    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:20.356567    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:20.371429    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:20.371515    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:20.384061    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:20.384144    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:20.395165    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:20.395240    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:20.406413    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:20.406498    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:20.419950    4886 logs.go:282] 0 containers: []
	W1028 04:47:20.419963    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:20.420035    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:20.431232    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:20.431251    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:20.431256    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:20.447333    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:20.447348    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:20.460147    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:20.460158    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:20.475178    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:20.475190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:20.498183    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:20.498194    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:20.511205    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:20.511217    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:20.525009    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:20.525021    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:20.530392    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:20.530399    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:20.568060    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:20.568069    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:20.582498    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:20.582510    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:20.599273    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:20.599283    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:20.621439    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:20.621449    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:20.633209    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:20.633224    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:20.669776    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:20.669787    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:20.682079    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:20.682091    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:23.210144    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:24.042830    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:24.043008    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:24.055456    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:24.055538    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:24.065924    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:24.066035    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:24.077122    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:24.077202    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:24.087447    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:24.087527    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:24.097897    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:24.097981    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:24.108647    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:24.108724    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:24.119040    5010 logs.go:282] 0 containers: []
	W1028 04:47:24.119051    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:24.119114    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:24.129668    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:24.129689    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:24.129698    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:24.141809    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:24.141823    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:24.156017    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:24.156029    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:24.167300    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:24.167310    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:24.178634    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:24.178645    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:24.193401    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:24.193411    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:24.212142    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:24.212152    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:24.225960    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:24.225971    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:24.241152    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:24.241163    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:24.253453    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:24.253462    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:24.258128    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:24.258134    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:24.293692    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:24.293703    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:24.331461    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:24.331472    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:24.345480    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:24.345491    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:24.357052    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:24.357061    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:24.368990    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:24.369000    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:24.392409    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:24.392417    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:28.212526    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:28.212720    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:28.229616    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:28.229712    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:28.242182    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:28.242263    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:28.253813    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:28.253899    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:28.265031    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:28.265105    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:28.275173    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:28.275251    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:28.285770    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:28.285845    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:28.296073    4886 logs.go:282] 0 containers: []
	W1028 04:47:28.296086    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:28.296148    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:28.306268    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:28.306286    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:28.306291    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:28.317839    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:28.317855    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:28.354885    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:28.354898    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:28.369581    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:28.369593    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:28.380901    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:28.380914    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:28.419441    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:28.419450    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:28.430986    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:28.431001    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:28.448370    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:28.448386    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:28.462538    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:28.462553    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:28.473819    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:28.473830    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:28.486228    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:28.486243    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:28.504390    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:28.504400    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:28.528087    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:28.528095    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:28.532078    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:28.532085    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:28.543943    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:28.543960    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:26.933108    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:31.056897    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:31.935433    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:31.935612    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:31.947726    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:31.947803    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:31.957929    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:31.958005    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:31.968033    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:31.968109    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:31.984390    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:31.984470    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:31.998969    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:31.999040    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:32.009707    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:32.009784    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:32.020204    5010 logs.go:282] 0 containers: []
	W1028 04:47:32.020217    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:32.020286    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:32.030911    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:32.030929    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:32.030935    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:32.045546    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:32.045557    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:32.056916    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:32.056927    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:32.068836    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:32.068846    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:32.082257    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:32.082268    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:32.093832    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:32.093843    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:32.104987    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:32.105001    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:32.143135    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:32.143143    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:32.180840    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:32.180850    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:32.218831    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:32.218842    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:32.231155    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:32.231165    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:32.248715    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:32.248727    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:32.253139    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:32.253149    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:32.264555    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:32.264567    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:32.279239    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:32.279251    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:32.292502    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:32.292511    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:32.315416    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:32.315423    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:34.831201    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:36.057617    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:36.057837    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:36.075346    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:36.075437    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:36.089105    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:36.089192    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:36.101796    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:36.101887    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:36.112217    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:36.112291    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:36.122740    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:36.122818    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:36.133489    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:36.133565    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:36.143406    4886 logs.go:282] 0 containers: []
	W1028 04:47:36.143418    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:36.143486    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:36.164952    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:36.164971    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:36.164978    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:36.190069    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:36.190079    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:36.204619    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:36.204630    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:36.216613    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:36.216625    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:36.231144    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:36.231156    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:36.243088    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:36.243099    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:36.261535    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:36.261546    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:36.272882    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:36.272894    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:36.311122    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:36.311130    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:36.330337    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:36.330351    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:36.354278    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:36.354287    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:36.366002    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:36.366014    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:36.370287    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:36.370297    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:36.405543    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:36.405557    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:36.419890    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:36.419938    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:38.940192    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:39.833578    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:39.833901    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:39.863748    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:39.863883    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:39.881497    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:39.881596    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:39.895811    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:39.895897    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:39.910110    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:39.910190    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:39.920891    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:39.920955    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:39.932378    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:39.932460    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:39.942723    5010 logs.go:282] 0 containers: []
	W1028 04:47:39.942739    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:39.942801    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:39.953198    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:39.953214    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:39.953219    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:39.967326    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:39.967339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:39.983274    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:39.983286    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:40.000653    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:40.000663    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:40.012468    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:40.012481    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:40.047528    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:40.047540    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:40.066924    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:40.066935    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:40.078531    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:40.078541    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:40.092520    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:40.092530    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:40.115221    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:40.115236    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:40.119420    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:40.119427    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:40.133528    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:40.133537    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:40.145382    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:40.145391    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:40.182384    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:40.182398    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:40.194480    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:40.194494    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:40.208629    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:40.208644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:40.220505    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:40.220520    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:43.942470    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:43.942706    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:43.965506    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:43.965635    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:43.981147    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:43.981247    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:43.993943    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:43.994028    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:44.005441    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:44.005512    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:44.023759    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:44.023830    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:44.033937    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:44.034002    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:44.044811    4886 logs.go:282] 0 containers: []
	W1028 04:47:44.044824    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:44.044890    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:44.056037    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:44.056059    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:44.056068    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:44.067582    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:44.067594    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:44.079311    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:44.079322    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:44.092359    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:44.092371    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:44.115497    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:44.115505    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:44.134182    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:44.134193    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:44.145695    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:44.145705    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:44.182836    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:44.182851    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:44.197731    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:44.197742    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:44.210134    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:44.210145    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:44.215047    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:44.215054    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:44.250490    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:44.250503    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:44.262612    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:44.262623    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:44.276947    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:44.276956    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:44.291779    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:44.291791    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:42.760103    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:46.812151    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:47.762668    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:47.763047    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:47.797419    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:47.797653    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:47.820100    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:47.820229    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:47.836382    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:47.836481    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:47.852070    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:47.852153    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:47.863937    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:47.864028    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:47.877101    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:47.877177    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:47.887552    5010 logs.go:282] 0 containers: []
	W1028 04:47:47.887562    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:47.887621    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:47.898208    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:47.898226    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:47.898231    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:47.922701    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:47.922713    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:47.945592    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:47.945604    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:47.969842    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:47.969851    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:47.981205    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:47.981218    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:47.995482    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:47.995494    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:48.033045    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:48.033054    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:48.044789    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:48.044802    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:48.056687    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:48.056697    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:48.068672    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:48.068682    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:48.079416    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:48.079428    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:48.101661    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:48.101674    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:48.138407    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:48.138416    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:48.174153    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:48.174166    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:48.189124    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:48.189133    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:48.193869    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:48.193877    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:48.209024    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:48.209035    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:51.814666    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:51.815140    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:51.848790    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:51.848949    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:51.868984    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:51.869085    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:51.894828    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:51.894908    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:51.905714    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:51.905792    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:51.916319    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:51.916397    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:51.929937    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:51.930018    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:51.941199    4886 logs.go:282] 0 containers: []
	W1028 04:47:51.941211    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:51.941278    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:51.956568    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:51.956586    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:51.956592    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:51.974300    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:51.974311    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:51.978859    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:51.978869    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:51.993177    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:47:51.993190    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:47:52.009030    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:52.009043    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:52.025254    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:47:52.025267    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:47:52.039536    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:52.039549    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:52.053249    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:52.053261    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:52.093905    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:52.093919    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:52.106324    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:52.106335    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:52.119191    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:52.119201    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:52.155131    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:52.155142    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:52.167972    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:47:52.167984    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:47:52.180026    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:52.180036    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:52.205362    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:47:52.205375    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:50.726334    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:54.719704    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:55.728846    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:55.729438    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:55.770223    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:55.770377    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:55.791423    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:55.791534    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:55.809583    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:55.809673    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:55.828549    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:55.828629    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:55.839015    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:55.839101    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:55.849954    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:55.850031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:55.861385    5010 logs.go:282] 0 containers: []
	W1028 04:47:55.861397    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:55.861461    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:55.876952    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:55.876971    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:55.876977    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:55.894717    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:55.894726    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:55.907241    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:55.907252    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:55.935044    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:55.935056    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:55.939930    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:55.939941    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:55.978156    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:55.978166    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:55.992074    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:55.992087    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:56.003446    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:56.003457    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:56.040718    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:56.040729    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:56.055113    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:56.055123    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:56.066678    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:56.066689    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:56.090090    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:56.090099    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:56.101636    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:56.101645    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:56.144763    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:56.144772    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:56.160986    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:56.160996    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:56.172962    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:56.172971    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:56.188877    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:56.188887    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:58.703865    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:59.722533    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:59.722980    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:59.761492    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:47:59.761644    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:59.784964    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:47:59.785087    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:59.803343    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:47:59.803437    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:59.814993    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:47:59.815068    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:59.836192    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:47:59.836263    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:59.847117    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:47:59.847197    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:59.860979    4886 logs.go:282] 0 containers: []
	W1028 04:47:59.860990    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:59.861055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:59.871627    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:47:59.871650    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:47:59.871656    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:47:59.886140    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:47:59.886153    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:47:59.898121    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:47:59.898135    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:47:59.916041    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:47:59.916055    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:47:59.930788    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:47:59.930798    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:47:59.942913    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:47:59.942922    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:47:59.963961    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:47:59.963973    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:47:59.976332    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:59.976343    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:00.002364    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:00.002384    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:00.015624    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:00.015643    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:00.020706    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:00.020719    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:00.033310    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:00.033326    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:00.047531    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:00.047541    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:00.059194    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:00.059205    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:00.096742    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:00.096751    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:02.634232    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:03.706285    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:03.706411    5010 kubeadm.go:597] duration metric: took 4m3.961812083s to restartPrimaryControlPlane
	W1028 04:48:03.706497    5010 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 04:48:03.706535    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 04:48:04.785041    5010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.078488625s)
	I1028 04:48:04.785120    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 04:48:04.790752    5010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:48:04.793804    5010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:48:04.797027    5010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:48:04.797033    5010 kubeadm.go:157] found existing configuration files:
	
	I1028 04:48:04.797077    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf
	I1028 04:48:04.799825    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:48:04.799859    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:48:04.802397    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf
	I1028 04:48:04.805190    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:48:04.805241    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:48:04.808405    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf
	I1028 04:48:04.811181    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:48:04.811211    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:48:04.813744    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf
	I1028 04:48:04.816778    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:48:04.816809    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
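
	The kubeadm.go:155-163 sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that endpoint is absent; a file that does not exist at all (grep exit status 2, as in every case here) is handled the same way, since `rm -f` ignores absence. A sketch of that logic, assuming direct file access instead of the ssh_runner (the endpoint is the one the log shows; the function name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not mention the
// expected control-plane endpoint, so kubeadm init can regenerate it.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A read error stands in for grep's exit status 2 above: the file
		// is treated as stale and removal is attempted regardless.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:57273")
}
```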
	I1028 04:48:04.819783    5010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 04:48:04.841724    5010 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 04:48:04.841800    5010 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 04:48:04.894801    5010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 04:48:04.894860    5010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 04:48:04.894909    5010 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 04:48:04.943155    5010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 04:48:04.949280    5010 out.go:235]   - Generating certificates and keys ...
	I1028 04:48:04.949314    5010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 04:48:04.949353    5010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 04:48:04.949397    5010 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 04:48:04.949430    5010 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 04:48:04.949467    5010 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 04:48:04.949505    5010 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 04:48:04.949546    5010 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 04:48:04.949579    5010 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 04:48:04.949619    5010 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 04:48:04.949659    5010 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 04:48:04.949677    5010 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 04:48:04.949717    5010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 04:48:05.099764    5010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 04:48:05.224647    5010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 04:48:05.313779    5010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 04:48:05.379561    5010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 04:48:05.409866    5010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 04:48:05.410290    5010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 04:48:05.410315    5010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 04:48:05.494741    5010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 04:48:07.636565    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:07.636682    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:07.647961    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:07.648043    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:07.660252    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:07.660334    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:07.672146    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:07.672229    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:07.683411    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:07.683492    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:07.699344    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:07.699425    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:07.710601    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:07.710686    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:07.722973    4886 logs.go:282] 0 containers: []
	W1028 04:48:07.722986    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:07.723055    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:07.734978    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:07.734998    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:07.735004    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:07.774086    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:07.774105    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:07.779192    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:07.779199    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:07.799623    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:07.799641    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:07.811878    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:07.811895    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:07.852653    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:07.852666    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:07.868062    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:07.868078    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:07.882824    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:07.882836    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:07.895939    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:07.895951    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:07.908314    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:07.908325    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:07.921897    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:07.921914    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:07.946392    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:07.946407    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:07.958785    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:07.958802    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:07.978142    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:07.978154    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:07.990300    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:07.990313    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:05.501890    5010 out.go:235]   - Booting up control plane ...
	I1028 04:48:05.502055    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 04:48:05.502164    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 04:48:05.502208    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 04:48:05.502254    5010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 04:48:05.502402    5010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 04:48:09.505229    5010 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003083 seconds
	I1028 04:48:09.505295    5010 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 04:48:09.508863    5010 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 04:48:10.017818    5010 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 04:48:10.017925    5010 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-714000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 04:48:10.521920    5010 kubeadm.go:310] [bootstrap-token] Using token: qsurol.sopdxvnxt7m0vkqj
	I1028 04:48:10.527469    5010 out.go:235]   - Configuring RBAC rules ...
	I1028 04:48:10.527518    5010 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 04:48:10.527560    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 04:48:10.529317    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 04:48:10.534090    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 04:48:10.535178    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 04:48:10.535957    5010 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 04:48:10.540332    5010 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 04:48:10.717989    5010 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 04:48:10.926405    5010 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 04:48:10.926828    5010 kubeadm.go:310] 
	I1028 04:48:10.926865    5010 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 04:48:10.926869    5010 kubeadm.go:310] 
	I1028 04:48:10.926913    5010 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 04:48:10.926920    5010 kubeadm.go:310] 
	I1028 04:48:10.926937    5010 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 04:48:10.926974    5010 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 04:48:10.927003    5010 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 04:48:10.927006    5010 kubeadm.go:310] 
	I1028 04:48:10.927032    5010 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 04:48:10.927034    5010 kubeadm.go:310] 
	I1028 04:48:10.927070    5010 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 04:48:10.927074    5010 kubeadm.go:310] 
	I1028 04:48:10.927103    5010 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 04:48:10.927147    5010 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 04:48:10.927188    5010 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 04:48:10.927192    5010 kubeadm.go:310] 
	I1028 04:48:10.927237    5010 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 04:48:10.927295    5010 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 04:48:10.927298    5010 kubeadm.go:310] 
	I1028 04:48:10.927346    5010 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qsurol.sopdxvnxt7m0vkqj \
	I1028 04:48:10.927413    5010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf \
	I1028 04:48:10.927423    5010 kubeadm.go:310] 	--control-plane 
	I1028 04:48:10.927426    5010 kubeadm.go:310] 
	I1028 04:48:10.927473    5010 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 04:48:10.927478    5010 kubeadm.go:310] 
	I1028 04:48:10.927528    5010 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qsurol.sopdxvnxt7m0vkqj \
	I1028 04:48:10.927587    5010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf 
	I1028 04:48:10.927700    5010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 04:48:10.927757    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:48:10.927767    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:48:10.933723    5010 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 04:48:10.937728    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 04:48:10.940708    5010 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
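
	The `scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` step above writes minikube's bridge CNI config into the guest. The 496-byte payload itself is not reproduced in the log, so the JSON below is only an illustrative bridge-plugin conflist of the kind minikube installs, not a copy; the 10.244.0.0/16 subnet is chosen to match the pod IPs visible in the coredns logs further down:

```go
package main

import "os"

// An illustrative bridge CNI conflist; the exact contents minikube writes
// are not shown in the log, so this is a stand-in, not the real payload.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Matches the `sudo mkdir -p /etc/cni/net.d` step in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```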
	I1028 04:48:10.946570    5010 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 04:48:10.946648    5010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 04:48:10.946659    5010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-714000 minikube.k8s.io/updated_at=2024_10_28T04_48_10_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=stopped-upgrade-714000 minikube.k8s.io/primary=true
	I1028 04:48:10.985283    5010 kubeadm.go:1113] duration metric: took 38.693167ms to wait for elevateKubeSystemPrivileges
	I1028 04:48:10.985292    5010 ops.go:34] apiserver oom_adj: -16
	I1028 04:48:10.985300    5010 kubeadm.go:394] duration metric: took 4m11.254047625s to StartCluster
	I1028 04:48:10.985310    5010 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:48:10.985408    5010 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:48:10.985857    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:48:10.986067    5010 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:48:10.986078    5010 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 04:48:10.986112    5010 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-714000"
	I1028 04:48:10.986120    5010 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-714000"
	W1028 04:48:10.986122    5010 addons.go:243] addon storage-provisioner should already be in state true
	I1028 04:48:10.986135    5010 host.go:66] Checking if "stopped-upgrade-714000" exists ...
	I1028 04:48:10.986139    5010 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-714000"
	I1028 04:48:10.986149    5010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-714000"
	I1028 04:48:10.986305    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:48:10.989549    5010 out.go:177] * Verifying Kubernetes components...
	I1028 04:48:10.990243    5010 kapi.go:59] client config for stopped-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102d96680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:48:10.993918    5010 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-714000"
	W1028 04:48:10.993923    5010 addons.go:243] addon default-storageclass should already be in state true
	I1028 04:48:10.993930    5010 host.go:66] Checking if "stopped-upgrade-714000" exists ...
	I1028 04:48:10.994439    5010 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 04:48:10.994444    5010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 04:48:10.994450    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:48:10.999667    5010 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:48:10.508639    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:11.002719    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:48:11.008740    5010 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:48:11.008749    5010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 04:48:11.008758    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:48:11.089879    5010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:48:11.095416    5010 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:48:11.095486    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:48:11.099338    5010 api_server.go:72] duration metric: took 113.260459ms to wait for apiserver process to appear ...
	I1028 04:48:11.099346    5010 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:48:11.099353    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:11.110533    5010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:48:11.171655    5010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 04:48:11.486864    5010 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 04:48:11.486876    5010 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
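
	The two `kubectl apply` runs above are how addons are enabled: each manifest is scp'd into /etc/kubernetes/addons and then applied with the kubectl binary minikube pinned inside the VM, against the in-VM kubeconfig. A sketch of that step as it would run inside the guest (the ssh transport is elided; paths and the sudo env-var form match the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies one addon manifest with the pinned in-VM kubectl,
// exactly as the logged command does: sudo accepts the KUBECONFIG=...
// assignment before the command it runs.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
```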
	I1028 04:48:15.510905    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:15.511173    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:15.533097    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:15.533210    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:15.547345    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:15.547429    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:15.559716    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:15.559796    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:15.570502    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:15.570581    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:15.581262    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:15.581345    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:15.591606    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:15.591691    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:15.601787    4886 logs.go:282] 0 containers: []
	W1028 04:48:15.601799    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:15.601860    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:15.612178    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:15.612196    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:15.612201    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:15.624177    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:15.624191    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:15.648431    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:15.648438    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:15.652626    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:15.652634    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:15.671541    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:15.671554    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:15.685621    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:15.685634    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:15.697414    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:15.697425    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:15.708955    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:15.708968    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:15.724630    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:15.724644    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:15.736234    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:15.736247    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:15.772618    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:15.772626    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:15.787243    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:15.787253    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:15.801787    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:15.801798    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:15.813571    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:15.813584    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:15.849879    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:15.849892    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:18.366927    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:16.101466    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:16.101504    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:23.369017    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:23.369171    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:23.381240    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:23.381341    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:23.393429    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:23.393513    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:23.405640    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:23.405726    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:23.417111    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:23.417194    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:23.427427    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:23.427510    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:23.438984    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:23.439065    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:23.450759    4886 logs.go:282] 0 containers: []
	W1028 04:48:23.450773    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:23.450841    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:23.461494    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:23.461512    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:23.461518    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:23.476251    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:23.476268    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:23.492341    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:23.492356    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:23.506122    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:23.506133    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:23.511368    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:23.511376    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:23.525181    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:23.525197    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:23.537761    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:23.537775    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:23.549709    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:23.549725    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:23.592075    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:23.592089    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:23.633007    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:23.633019    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:23.644655    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:23.644666    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:23.661963    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:23.661973    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:23.673411    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:23.673424    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:23.697685    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:23.697696    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:23.709110    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:23.709124    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:21.101795    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:21.101819    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:26.229289    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:26.102159    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:26.102190    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:31.231520    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:31.231626    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:48:31.245055    4886 logs.go:282] 1 containers: [fdf0adcd0bc4]
	I1028 04:48:31.245145    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:48:31.259741    4886 logs.go:282] 1 containers: [e39b85c1b224]
	I1028 04:48:31.259814    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:48:31.276275    4886 logs.go:282] 4 containers: [0eca4679df0f 5359e344efc7 e6b675482666 3bc718a2c833]
	I1028 04:48:31.276367    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:48:31.287616    4886 logs.go:282] 1 containers: [0d31e77afb39]
	I1028 04:48:31.287684    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:48:31.298219    4886 logs.go:282] 1 containers: [a4962d6996f4]
	I1028 04:48:31.298287    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:48:31.308688    4886 logs.go:282] 1 containers: [5602839d5e60]
	I1028 04:48:31.308765    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:48:31.318683    4886 logs.go:282] 0 containers: []
	W1028 04:48:31.318693    4886 logs.go:284] No container was found matching "kindnet"
	I1028 04:48:31.318753    4886 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:48:31.333264    4886 logs.go:282] 1 containers: [d4660ff68fc4]
	I1028 04:48:31.333284    4886 logs.go:123] Gathering logs for kube-scheduler [0d31e77afb39] ...
	I1028 04:48:31.333289    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d31e77afb39"
	I1028 04:48:31.347622    4886 logs.go:123] Gathering logs for kubelet ...
	I1028 04:48:31.347633    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:48:31.384046    4886 logs.go:123] Gathering logs for dmesg ...
	I1028 04:48:31.384056    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:48:31.388713    4886 logs.go:123] Gathering logs for kube-apiserver [fdf0adcd0bc4] ...
	I1028 04:48:31.388721    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf0adcd0bc4"
	I1028 04:48:31.402890    4886 logs.go:123] Gathering logs for coredns [e6b675482666] ...
	I1028 04:48:31.402901    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b675482666"
	I1028 04:48:31.414793    4886 logs.go:123] Gathering logs for coredns [3bc718a2c833] ...
	I1028 04:48:31.414807    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc718a2c833"
	I1028 04:48:31.427099    4886 logs.go:123] Gathering logs for Docker ...
	I1028 04:48:31.427114    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:48:31.451112    4886 logs.go:123] Gathering logs for kube-proxy [a4962d6996f4] ...
	I1028 04:48:31.451119    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4962d6996f4"
	I1028 04:48:31.463238    4886 logs.go:123] Gathering logs for kube-controller-manager [5602839d5e60] ...
	I1028 04:48:31.463250    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5602839d5e60"
	I1028 04:48:31.481556    4886 logs.go:123] Gathering logs for container status ...
	I1028 04:48:31.481567    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:48:31.497990    4886 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:48:31.498006    4886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:48:31.536212    4886 logs.go:123] Gathering logs for etcd [e39b85c1b224] ...
	I1028 04:48:31.536228    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39b85c1b224"
	I1028 04:48:31.557579    4886 logs.go:123] Gathering logs for coredns [0eca4679df0f] ...
	I1028 04:48:31.557596    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eca4679df0f"
	I1028 04:48:31.568950    4886 logs.go:123] Gathering logs for coredns [5359e344efc7] ...
	I1028 04:48:31.568962    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5359e344efc7"
	I1028 04:48:31.580628    4886 logs.go:123] Gathering logs for storage-provisioner [d4660ff68fc4] ...
	I1028 04:48:31.580645    4886 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4660ff68fc4"
	I1028 04:48:34.094543    4886 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:31.102593    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:31.102622    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:39.096833    4886 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:39.102449    4886 out.go:201] 
	W1028 04:48:39.106414    4886 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 04:48:39.106420    4886 out.go:270] * 
	W1028 04:48:39.106902    4886 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:48:39.118373    4886 out.go:201] 
	I1028 04:48:36.103321    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:36.103350    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:41.104120    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:41.104155    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 04:48:41.489262    5010 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 04:48:41.493421    5010 out.go:177] * Enabled addons: storage-provisioner
	I1028 04:48:41.502296    5010 addons.go:510] duration metric: took 30.516102959s for enable addons: enabled=[storage-provisioner]
	I1028 04:48:46.105240    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:46.105330    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
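
	From here process 5010 keeps repeating the probe that already exhausted process 4886's 6m0s node wait: a GET against https://10.0.2.15:8443/healthz with a client timeout, logged as "stopped" each time the timeout fires before headers arrive. A self-contained reproduction of that probe loop, assuming the same endpoint and skipping certificate verification for brevity (minikube itself trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver healthz endpoint with
// a short client timeout, the same check api_server.go:253 logs above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // start.go's 6m0s node wait
	for time.Now().Before(deadline) {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("apiserver healthz never reported healthy")
}
```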
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-10-28 11:39:45 UTC, ends at Mon 2024-10-28 11:48:55 UTC. --
	Oct 28 11:48:39 running-upgrade-687000 dockerd[3223]: time="2024-10-28T11:48:39.360297624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 28 11:48:39 running-upgrade-687000 dockerd[3223]: time="2024-10-28T11:48:39.360311541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 28 11:48:39 running-upgrade-687000 dockerd[3223]: time="2024-10-28T11:48:39.360316374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 28 11:48:39 running-upgrade-687000 dockerd[3223]: time="2024-10-28T11:48:39.360368456Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e14593ec59dfdbd39ad2f9a95e05cbedb10633a5ee092e17b39010ed83741178 pid=18884 runtime=io.containerd.runc.v2
	Oct 28 11:48:40 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:40Z" level=error msg="ContainerStats resp: {0x400041d680 linux}"
	Oct 28 11:48:40 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x4000805e40 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x4000412500 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x4000412700 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x4000412ac0 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x4000412c00 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x40009ce200 linux}"
	Oct 28 11:48:41 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:41Z" level=error msg="ContainerStats resp: {0x40009ce940 linux}"
	Oct 28 11:48:45 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 11:48:50 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:50Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 11:48:51 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:51Z" level=error msg="ContainerStats resp: {0x4000805440 linux}"
	Oct 28 11:48:51 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:51Z" level=error msg="ContainerStats resp: {0x4000805580 linux}"
	Oct 28 11:48:52 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:52Z" level=error msg="ContainerStats resp: {0x400090ec00 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x400090fcc0 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x40008c7dc0 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x4000358640 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x4000359500 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x400088ac80 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x400041c440 linux}"
	Oct 28 11:48:53 running-upgrade-687000 cri-dockerd[3058]: time="2024-10-28T11:48:53Z" level=error msg="ContainerStats resp: {0x400088b6c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ad77d5966d4ae       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   67ac3b68e6ce1
	e14593ec59dfd       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   6970075308d68
	0eca4679df0ff       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   67ac3b68e6ce1
	5359e344efc74       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6970075308d68
	a4962d6996f44       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   f93fd7a9c2b19
	d4660ff68fc4e       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   5ae0c0bee7978
	fdf0adcd0bc4b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   209b1cc9a48aa
	0d31e77afb39e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   1617f8577e581
	5602839d5e60d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   cbfe66e31745f
	e39b85c1b2241       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   87d43a7afe88e
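
	The table shows both coredns pods on their second attempt, with the attempt-1 containers from the earlier restart still present as Exited, while the attempt-0 control-plane containers keep running. A sketch that runs the same fallback command the "container status" step uses above and flags the Exited rows:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// flagExited runs the logged container-status command (crictl if present,
// otherwise docker) and prints any row whose STATE column says Exited.
func flagExited() error {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").Output()
	if err != nil {
		return err
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if strings.Contains(sc.Text(), "Exited") {
			fmt.Println(sc.Text())
		}
	}
	return sc.Err()
}

func main() {
	if err := flagExited(); err != nil {
		fmt.Println(err)
	}
}
```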
	
	
	==> coredns [0eca4679df0f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:45685->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:45662->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:46061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:39812->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:46078->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:48004->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:32871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:56176->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:43795->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7373586092174741570.2170727357862491635. HINFO: read udp 10.244.0.3:33629->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5359e344efc7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:36134->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:56869->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:47719->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:41115->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:51002->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:58180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:46507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:37237->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:43287->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494425195848232770.7275708117974491797. HINFO: read udp 10.244.0.2:44402->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad77d5966d4a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6120348050692983809.4421530579939874285. HINFO: read udp 10.244.0.3:37333->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6120348050692983809.4421530579939874285. HINFO: read udp 10.244.0.3:40943->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6120348050692983809.4421530579939874285. HINFO: read udp 10.244.0.3:51225->10.0.2.3:53: i/o timeout
	
	
	==> coredns [e14593ec59df] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4886967380332192128.7184915790611645216. HINFO: read udp 10.244.0.2:56873->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4886967380332192128.7184915790611645216. HINFO: read udp 10.244.0.2:33049->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4886967380332192128.7184915790611645216. HINFO: read udp 10.244.0.2:45521->10.0.2.3:53: i/o timeout
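All four coredns containers fail the same way: HINFO probes forwarded to 10.0.2.3:53 time out. The 10.0.2.x addressing matches QEMU's user-mode (SLIRP) defaults, where 10.0.2.3 is the built-in DNS forwarder, so the guest is not reaching host-side DNS at all; CoreDNS itself is configured and serving on :53. A quick confirmation from the guest, sketched on the assumption that the Buildroot image ships busybox nslookup:

	out/minikube-darwin-arm64 ssh -p running-upgrade-687000 -- nslookup google.com 10.0.2.3   # should answer if the SLIRP DNS forwarder works
	out/minikube-darwin-arm64 ssh -p running-upgrade-687000 -- cat /etc/resolv.conf           # confirm which upstream the VM was handed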
	
	
	==> describe nodes <==
	Name:               running-upgrade-687000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-687000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=running-upgrade-687000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T04_44_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:44:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-687000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:48:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:44:37 +0000   Mon, 28 Oct 2024 11:44:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:44:37 +0000   Mon, 28 Oct 2024 11:44:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:44:37 +0000   Mon, 28 Oct 2024 11:44:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:44:37 +0000   Mon, 28 Oct 2024 11:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-687000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe94295191c84f358533818be09715e1
	  System UUID:                fe94295191c84f358533818be09715e1
	  Boot ID:                    caf43f6d-7d86-4268-bc6b-ccd480047ca2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-gn7t7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-jvd9l                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-687000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-687000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-687000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-grdjb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-687000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-687000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-687000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-687000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-687000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-687000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-687000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-687000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-687000 event: Registered Node running-upgrade-687000 in Controller
	
	
	==> dmesg <==
	[  +1.676526] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.066292] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.062254] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.136105] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.067476] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.058198] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[Oct28 11:40] systemd-fstab-generator[1292]: Ignoring "noauto" for root device
	[ +10.653263] systemd-fstab-generator[1983]: Ignoring "noauto" for root device
	[  +2.821187] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +0.136297] systemd-fstab-generator[2301]: Ignoring "noauto" for root device
	[  +0.087934] systemd-fstab-generator[2312]: Ignoring "noauto" for root device
	[  +0.076252] systemd-fstab-generator[2325]: Ignoring "noauto" for root device
	[  +3.376627] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.173381] systemd-fstab-generator[3015]: Ignoring "noauto" for root device
	[  +0.066979] systemd-fstab-generator[3026]: Ignoring "noauto" for root device
	[  +0.066775] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.081303] systemd-fstab-generator[3051]: Ignoring "noauto" for root device
	[  +2.341420] systemd-fstab-generator[3202]: Ignoring "noauto" for root device
	[  +3.487161] systemd-fstab-generator[3610]: Ignoring "noauto" for root device
	[  +1.167457] systemd-fstab-generator[3905]: Ignoring "noauto" for root device
	[ +18.922045] kauditd_printk_skb: 68 callbacks suppressed
	[Oct28 11:44] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.417952] systemd-fstab-generator[11914]: Ignoring "noauto" for root device
	[  +5.619261] systemd-fstab-generator[12528]: Ignoring "noauto" for root device
	[  +0.482245] systemd-fstab-generator[12665]: Ignoring "noauto" for root device
	
	
	==> etcd [e39b85c1b224] <==
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-28T11:44:33.090Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-28T11:44:34.085Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-687000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:44:34.086Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:44:34.088Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-28T11:44:34.088Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T11:44:34.088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T11:44:34.091Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:48:55 up 9 min,  0 users,  load average: 0.05, 0.15, 0.09
	Linux running-upgrade-687000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fdf0adcd0bc4] <==
	I1028 11:44:35.304843       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1028 11:44:35.334097       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1028 11:44:35.334286       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1028 11:44:35.334517       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1028 11:44:35.334574       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 11:44:35.334834       1 cache.go:39] Caches are synced for autoregister controller
	I1028 11:44:35.362082       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1028 11:44:36.062758       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1028 11:44:36.236363       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1028 11:44:36.240545       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1028 11:44:36.240569       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 11:44:36.379652       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 11:44:36.390099       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 11:44:36.403980       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1028 11:44:36.405915       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1028 11:44:36.406228       1 controller.go:611] quota admission added evaluator for: endpoints
	I1028 11:44:36.407425       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:44:37.367357       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1028 11:44:37.764295       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1028 11:44:37.768100       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1028 11:44:37.808707       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1028 11:44:37.810064       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 11:44:51.028472       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1028 11:44:51.127488       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:44:52.326392       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [5602839d5e60] <==
	I1028 11:44:50.389880       1 shared_informer.go:262] Caches are synced for job
	I1028 11:44:50.394242       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1028 11:44:50.394365       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1028 11:44:50.395484       1 shared_informer.go:262] Caches are synced for deployment
	I1028 11:44:50.400338       1 shared_informer.go:262] Caches are synced for ephemeral
	I1028 11:44:50.416966       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1028 11:44:50.427091       1 shared_informer.go:262] Caches are synced for endpoint
	I1028 11:44:50.427145       1 shared_informer.go:262] Caches are synced for persistent volume
	I1028 11:44:50.427179       1 shared_informer.go:262] Caches are synced for expand
	I1028 11:44:50.442143       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 11:44:50.451030       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 11:44:50.468399       1 shared_informer.go:262] Caches are synced for HPA
	I1028 11:44:50.468436       1 shared_informer.go:262] Caches are synced for disruption
	I1028 11:44:50.468446       1 disruption.go:371] Sending events to api server.
	I1028 11:44:50.474687       1 shared_informer.go:262] Caches are synced for attach detach
	I1028 11:44:50.477972       1 shared_informer.go:262] Caches are synced for daemon sets
	I1028 11:44:50.481086       1 shared_informer.go:262] Caches are synced for stateful set
	I1028 11:44:50.483245       1 shared_informer.go:262] Caches are synced for PVC protection
	I1028 11:44:50.850365       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 11:44:50.876927       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 11:44:50.876939       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1028 11:44:51.029634       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1028 11:44:51.134103       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-grdjb"
	I1028 11:44:51.229321       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jvd9l"
	I1028 11:44:51.233295       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gn7t7"
	
	
	==> kube-proxy [a4962d6996f4] <==
	I1028 11:44:52.278757       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1028 11:44:52.278871       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1028 11:44:52.278960       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1028 11:44:52.324359       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1028 11:44:52.324371       1 server_others.go:206] "Using iptables Proxier"
	I1028 11:44:52.324399       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1028 11:44:52.324519       1 server.go:661] "Version info" version="v1.24.1"
	I1028 11:44:52.324525       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:44:52.324985       1 config.go:317] "Starting service config controller"
	I1028 11:44:52.325102       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1028 11:44:52.325128       1 config.go:226] "Starting endpoint slice config controller"
	I1028 11:44:52.325169       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1028 11:44:52.325286       1 config.go:444] "Starting node config controller"
	I1028 11:44:52.325324       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1028 11:44:52.425601       1 shared_informer.go:262] Caches are synced for node config
	I1028 11:44:52.425611       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1028 11:44:52.425601       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [0d31e77afb39] <==
	W1028 11:44:35.297539       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:44:35.297827       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1028 11:44:35.295569       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1028 11:44:35.297334       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:44:35.297861       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1028 11:44:35.297356       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 11:44:35.297884       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1028 11:44:35.297396       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:44:35.297923       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1028 11:44:35.297448       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:44:35.297954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1028 11:44:35.297556       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:44:35.297976       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1028 11:44:35.297575       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 11:44:35.297994       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1028 11:44:35.297623       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:44:35.298015       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:44:35.297627       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 11:44:35.298035       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1028 11:44:35.295519       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:44:35.297856       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:44:35.298071       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1028 11:44:36.133769       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:44:36.133871       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1028 11:44:36.686722       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
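The burst of "forbidden" list/watch errors at 11:44:35 is the usual scheduler startup race: it comes up before the RBAC bootstrap policy has been reconciled, and the final line shows its caches syncing about a second later, so these are benign here. If such errors persisted, the grants could be checked directly; a sketch, assuming a kubeconfig pointed at this cluster:

	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i watch csidrivers.storage.k8s.io --as=system:kube-scheduler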
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-10-28 11:39:45 UTC, ends at Mon 2024-10-28 11:48:55 UTC. --
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: E1028 11:44:39.593627   12534 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-687000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-687000"
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: E1028 11:44:39.794666   12534 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-687000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-687000"
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: I1028 11:44:39.855563   12534 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/5971e690-0fde-4b21-8bc8-1450fc0a89a7/volumes"
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: I1028 11:44:39.855598   12534 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/6f146005-8bc7-4e77-b15c-9c859e6b5367/volumes"
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: I1028 11:44:39.855611   12534 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/b494f173-d998-48d9-a2b1-d82290a821a1/volumes"
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: I1028 11:44:39.990640   12534 request.go:601] Waited for 1.119113284s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Oct 28 11:44:39 running-upgrade-687000 kubelet[12534]: E1028 11:44:39.994090   12534 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-687000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-687000"
	Oct 28 11:44:50 running-upgrade-687000 kubelet[12534]: I1028 11:44:50.216691   12534 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 11:44:50 running-upgrade-687000 kubelet[12534]: I1028 11:44:50.217063   12534 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 11:44:50 running-upgrade-687000 kubelet[12534]: I1028 11:44:50.399434   12534 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:44:50 running-upgrade-687000 kubelet[12534]: I1028 11:44:50.518038   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4fb4ad73-3fb0-450d-8ab0-44b0e27ed9a5-tmp\") pod \"storage-provisioner\" (UID: \"4fb4ad73-3fb0-450d-8ab0-44b0e27ed9a5\") " pod="kube-system/storage-provisioner"
	Oct 28 11:44:50 running-upgrade-687000 kubelet[12534]: I1028 11:44:50.518073   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb6ng\" (UniqueName: \"kubernetes.io/projected/4fb4ad73-3fb0-450d-8ab0-44b0e27ed9a5-kube-api-access-gb6ng\") pod \"storage-provisioner\" (UID: \"4fb4ad73-3fb0-450d-8ab0-44b0e27ed9a5\") " pod="kube-system/storage-provisioner"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.137156   12534 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.231523   12534 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.237801   12534 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.325136   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/122e6088-c433-4856-84c9-5851e1e70fe3-xtables-lock\") pod \"kube-proxy-grdjb\" (UID: \"122e6088-c433-4856-84c9-5851e1e70fe3\") " pod="kube-system/kube-proxy-grdjb"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.325169   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/122e6088-c433-4856-84c9-5851e1e70fe3-lib-modules\") pod \"kube-proxy-grdjb\" (UID: \"122e6088-c433-4856-84c9-5851e1e70fe3\") " pod="kube-system/kube-proxy-grdjb"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.325180   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/122e6088-c433-4856-84c9-5851e1e70fe3-kube-proxy\") pod \"kube-proxy-grdjb\" (UID: \"122e6088-c433-4856-84c9-5851e1e70fe3\") " pod="kube-system/kube-proxy-grdjb"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.325192   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89zn\" (UniqueName: \"kubernetes.io/projected/122e6088-c433-4856-84c9-5851e1e70fe3-kube-api-access-g89zn\") pod \"kube-proxy-grdjb\" (UID: \"122e6088-c433-4856-84c9-5851e1e70fe3\") " pod="kube-system/kube-proxy-grdjb"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.429550   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p964n\" (UniqueName: \"kubernetes.io/projected/20704afb-4958-49c8-8940-fb2b86e2d037-kube-api-access-p964n\") pod \"coredns-6d4b75cb6d-gn7t7\" (UID: \"20704afb-4958-49c8-8940-fb2b86e2d037\") " pod="kube-system/coredns-6d4b75cb6d-gn7t7"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.429590   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9199d72-ce11-4e37-9ee9-ed880e5fbac6-config-volume\") pod \"coredns-6d4b75cb6d-jvd9l\" (UID: \"b9199d72-ce11-4e37-9ee9-ed880e5fbac6\") " pod="kube-system/coredns-6d4b75cb6d-jvd9l"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.429603   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20704afb-4958-49c8-8940-fb2b86e2d037-config-volume\") pod \"coredns-6d4b75cb6d-gn7t7\" (UID: \"20704afb-4958-49c8-8940-fb2b86e2d037\") " pod="kube-system/coredns-6d4b75cb6d-gn7t7"
	Oct 28 11:44:51 running-upgrade-687000 kubelet[12534]: I1028 11:44:51.429627   12534 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5fr\" (UniqueName: \"kubernetes.io/projected/b9199d72-ce11-4e37-9ee9-ed880e5fbac6-kube-api-access-sg5fr\") pod \"coredns-6d4b75cb6d-jvd9l\" (UID: \"b9199d72-ce11-4e37-9ee9-ed880e5fbac6\") " pod="kube-system/coredns-6d4b75cb6d-jvd9l"
	Oct 28 11:48:40 running-upgrade-687000 kubelet[12534]: I1028 11:48:40.181002   12534 scope.go:110] "RemoveContainer" containerID="e6b675482666b887ae496e81c9a15b600a41df59608013adf0171e5140d19c9c"
	Oct 28 11:48:40 running-upgrade-687000 kubelet[12534]: I1028 11:48:40.201551   12534 scope.go:110] "RemoveContainer" containerID="3bc718a2c833cef3402aef43758349a8e453d81a3de87a9e1772c1b7037cd5d8"
	
	
	==> storage-provisioner [d4660ff68fc4] <==
	I1028 11:44:50.924834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:44:50.930793       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:44:50.930815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:44:50.933840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:44:50.933914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-687000_7694224d-1be3-4c47-accf-0240d1aeaa66!
	I1028 11:44:50.934192       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce50d74c-6379-4909-afe6-2dab151ad476", APIVersion:"v1", ResourceVersion:"332", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-687000_7694224d-1be3-4c47-accf-0240d1aeaa66 became leader
	I1028 11:44:51.034389       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-687000_7694224d-1be3-4c47-accf-0240d1aeaa66!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-687000 -n running-upgrade-687000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-687000 -n running-upgrade-687000: exit status 2 (15.748564917s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-687000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-687000
--- FAIL: TestRunningBinaryUpgrade (593.93s)
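The component logs captured above (taken around 11:48:55) still show a running apiserver, yet the status probe moments later reports Stopped, so the control plane died between the log capture and the check. For manual triage of a profile left in this state, the same binary the harness uses would do; a sketch (the --file flag is assumed to be available in this minikube version):

	out/minikube-darwin-arm64 status -p running-upgrade-687000
	out/minikube-darwin-arm64 logs -p running-upgrade-687000 --file=./running-upgrade-687000.log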

TestKubernetesUpgrade (17.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.029986917s)

-- stdout --
	* [kubernetes-upgrade-628000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-628000" primary control-plane node in "kubernetes-upgrade-628000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-628000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
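Both VM-creation attempts fail at the same point: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU is never handed a network fd and no VM starts. That also explains the short ~10 s duration: two quick create/retry cycles, no boot. Host-side checks, sketched from the paths the log itself uses:

	ls -l /var/run/socket_vmnet                                             # the daemon's listening socket should exist
	pgrep -fl socket_vmnet                                                  # the socket_vmnet daemon should be running
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true    # connect smoke test: exits 0 only if the hand-off works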
** stderr ** 
	I1028 04:42:17.903882    4944 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:42:17.904060    4944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:42:17.904063    4944 out.go:358] Setting ErrFile to fd 2...
	I1028 04:42:17.904065    4944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:42:17.904230    4944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:42:17.905413    4944 out.go:352] Setting JSON to false
	I1028 04:42:17.923478    4944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4308,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:42:17.923555    4944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:42:17.929649    4944 out.go:177] * [kubernetes-upgrade-628000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:42:17.937566    4944 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:42:17.937589    4944 notify.go:220] Checking for updates...
	I1028 04:42:17.943553    4944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:42:17.946554    4944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:42:17.950577    4944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:42:17.953537    4944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:42:17.956604    4944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:42:17.959896    4944 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:42:17.959964    4944 config.go:182] Loaded profile config "running-upgrade-687000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:42:17.960014    4944 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:42:17.963539    4944 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:42:17.970612    4944 start.go:297] selected driver: qemu2
	I1028 04:42:17.970619    4944 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:42:17.970626    4944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:42:17.973091    4944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:42:17.974442    4944 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:42:17.977704    4944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:42:17.977723    4944 cni.go:84] Creating CNI manager for ""
	I1028 04:42:17.977744    4944 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 04:42:17.977770    4944 start.go:340] cluster config:
	{Name:kubernetes-upgrade-628000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:42:17.982028    4944 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:42:17.990563    4944 out.go:177] * Starting "kubernetes-upgrade-628000" primary control-plane node in "kubernetes-upgrade-628000" cluster
	I1028 04:42:17.994598    4944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:42:17.994615    4944 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:42:17.994626    4944 cache.go:56] Caching tarball of preloaded images
	I1028 04:42:17.994706    4944 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:42:17.994712    4944 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 04:42:17.994763    4944 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kubernetes-upgrade-628000/config.json ...
	I1028 04:42:17.994779    4944 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kubernetes-upgrade-628000/config.json: {Name:mkf07004ede022c4eac7f180cac9c2877457b003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:42:17.995078    4944 start.go:360] acquireMachinesLock for kubernetes-upgrade-628000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:42:17.995119    4944 start.go:364] duration metric: took 36.166µs to acquireMachinesLock for "kubernetes-upgrade-628000"
	I1028 04:42:17.995130    4944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:42:17.995151    4944 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:42:18.002545    4944 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:42:18.025922    4944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-628000" (driver="qemu2")
	I1028 04:42:18.025955    4944 client.go:168] LocalClient.Create starting
	I1028 04:42:18.026032    4944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:42:18.026071    4944 main.go:141] libmachine: Decoding PEM data...
	I1028 04:42:18.026085    4944 main.go:141] libmachine: Parsing certificate...
	I1028 04:42:18.026121    4944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:42:18.026151    4944 main.go:141] libmachine: Decoding PEM data...
	I1028 04:42:18.026159    4944 main.go:141] libmachine: Parsing certificate...
	I1028 04:42:18.026522    4944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:42:18.261093    4944 main.go:141] libmachine: Creating SSH key...
	I1028 04:42:18.381468    4944 main.go:141] libmachine: Creating Disk image...
	I1028 04:42:18.381476    4944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:42:18.381679    4944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:18.392533    4944 main.go:141] libmachine: STDOUT: 
	I1028 04:42:18.392552    4944 main.go:141] libmachine: STDERR: 
	I1028 04:42:18.392605    4944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2 +20000M
	I1028 04:42:18.401391    4944 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:42:18.401407    4944 main.go:141] libmachine: STDERR: 
	I1028 04:42:18.401418    4944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:18.401426    4944 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:42:18.401439    4944 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:42:18.401472    4944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:4d:a4:ed:ac:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:18.403266    4944 main.go:141] libmachine: STDOUT: 
	I1028 04:42:18.403283    4944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:42:18.403304    4944 client.go:171] duration metric: took 377.340875ms to LocalClient.Create
	I1028 04:42:20.405402    4944 start.go:128] duration metric: took 2.41023625s to createHost
	I1028 04:42:20.405448    4944 start.go:83] releasing machines lock for "kubernetes-upgrade-628000", held for 2.410319875s
	W1028 04:42:20.405484    4944 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:42:20.409715    4944 out.go:177] * Deleting "kubernetes-upgrade-628000" in qemu2 ...
	W1028 04:42:20.435424    4944 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:42:20.435434    4944 start.go:729] Will try again in 5 seconds ...
	I1028 04:42:25.437751    4944 start.go:360] acquireMachinesLock for kubernetes-upgrade-628000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:42:25.438456    4944 start.go:364] duration metric: took 589.041µs to acquireMachinesLock for "kubernetes-upgrade-628000"
	I1028 04:42:25.438607    4944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:42:25.438928    4944 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:42:25.445495    4944 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:42:25.488665    4944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-628000" (driver="qemu2")
	I1028 04:42:25.488753    4944 client.go:168] LocalClient.Create starting
	I1028 04:42:25.488960    4944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:42:25.489053    4944 main.go:141] libmachine: Decoding PEM data...
	I1028 04:42:25.489069    4944 main.go:141] libmachine: Parsing certificate...
	I1028 04:42:25.489147    4944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:42:25.489210    4944 main.go:141] libmachine: Decoding PEM data...
	I1028 04:42:25.489222    4944 main.go:141] libmachine: Parsing certificate...
	I1028 04:42:25.490025    4944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:42:25.667423    4944 main.go:141] libmachine: Creating SSH key...
	I1028 04:42:25.831482    4944 main.go:141] libmachine: Creating Disk image...
	I1028 04:42:25.831490    4944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:42:25.831745    4944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:25.842233    4944 main.go:141] libmachine: STDOUT: 
	I1028 04:42:25.842254    4944 main.go:141] libmachine: STDERR: 
	I1028 04:42:25.842315    4944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2 +20000M
	I1028 04:42:25.851017    4944 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:42:25.851034    4944 main.go:141] libmachine: STDERR: 
	I1028 04:42:25.851046    4944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:25.851052    4944 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:42:25.851074    4944 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:42:25.851102    4944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7f:eb:5e:9c:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:25.852951    4944 main.go:141] libmachine: STDOUT: 
	I1028 04:42:25.852965    4944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:42:25.852977    4944 client.go:171] duration metric: took 364.204708ms to LocalClient.Create
	I1028 04:42:27.855203    4944 start.go:128] duration metric: took 2.416227334s to createHost
	I1028 04:42:27.855310    4944 start.go:83] releasing machines lock for "kubernetes-upgrade-628000", held for 2.416820708s
	W1028 04:42:27.855715    4944 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:42:27.868310    4944 out.go:201] 
	W1028 04:42:27.872470    4944 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:42:27.872496    4944 out.go:270] * 
	* 
	W1028 04:42:27.874906    4944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:42:27.886341    4944 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-628000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-628000: (2.006666208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-628000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-628000 status --format={{.Host}}: exit status 7 (65.850167ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.191427917s)

-- stdout --
	* [kubernetes-upgrade-628000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-628000" primary control-plane node in "kubernetes-upgrade-628000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-628000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-628000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:42:30.009955    4977 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:42:30.010117    4977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:42:30.010120    4977 out.go:358] Setting ErrFile to fd 2...
	I1028 04:42:30.010122    4977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:42:30.010270    4977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:42:30.011362    4977 out.go:352] Setting JSON to false
	I1028 04:42:30.029626    4977 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4321,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:42:30.029707    4977 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:42:30.033674    4977 out.go:177] * [kubernetes-upgrade-628000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:42:30.040652    4977 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:42:30.040691    4977 notify.go:220] Checking for updates...
	I1028 04:42:30.048609    4977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:42:30.052650    4977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:42:30.055619    4977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:42:30.058637    4977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:42:30.061657    4977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:42:30.064819    4977 config.go:182] Loaded profile config "kubernetes-upgrade-628000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 04:42:30.065080    4977 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:42:30.068654    4977 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:42:30.075627    4977 start.go:297] selected driver: qemu2
	I1028 04:42:30.075635    4977 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:42:30.075699    4977 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:42:30.078152    4977 cni.go:84] Creating CNI manager for ""
	I1028 04:42:30.078188    4977 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:42:30.078209    4977 start.go:340] cluster config:
	{Name:kubernetes-upgrade-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:42:30.082425    4977 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:42:30.089563    4977 out.go:177] * Starting "kubernetes-upgrade-628000" primary control-plane node in "kubernetes-upgrade-628000" cluster
	I1028 04:42:30.093598    4977 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:42:30.093613    4977 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:42:30.093620    4977 cache.go:56] Caching tarball of preloaded images
	I1028 04:42:30.093685    4977 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:42:30.093690    4977 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:42:30.093729    4977 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kubernetes-upgrade-628000/config.json ...
	I1028 04:42:30.094094    4977 start.go:360] acquireMachinesLock for kubernetes-upgrade-628000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:42:30.094122    4977 start.go:364] duration metric: took 23.125µs to acquireMachinesLock for "kubernetes-upgrade-628000"
	I1028 04:42:30.094131    4977 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:42:30.094134    4977 fix.go:54] fixHost starting: 
	I1028 04:42:30.094243    4977 fix.go:112] recreateIfNeeded on kubernetes-upgrade-628000: state=Stopped err=<nil>
	W1028 04:42:30.094252    4977 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:42:30.102614    4977 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-628000" ...
	I1028 04:42:30.106650    4977 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:42:30.106690    4977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7f:eb:5e:9c:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:30.108931    4977 main.go:141] libmachine: STDOUT: 
	I1028 04:42:30.108951    4977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:42:30.108978    4977 fix.go:56] duration metric: took 14.841959ms for fixHost
	I1028 04:42:30.108982    4977 start.go:83] releasing machines lock for "kubernetes-upgrade-628000", held for 14.855792ms
	W1028 04:42:30.108988    4977 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:42:30.109038    4977 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:42:30.109042    4977 start.go:729] Will try again in 5 seconds ...
	I1028 04:42:35.111367    4977 start.go:360] acquireMachinesLock for kubernetes-upgrade-628000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:42:35.111831    4977 start.go:364] duration metric: took 367.833µs to acquireMachinesLock for "kubernetes-upgrade-628000"
	I1028 04:42:35.111940    4977 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:42:35.111953    4977 fix.go:54] fixHost starting: 
	I1028 04:42:35.112444    4977 fix.go:112] recreateIfNeeded on kubernetes-upgrade-628000: state=Stopped err=<nil>
	W1028 04:42:35.112463    4977 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:42:35.117875    4977 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-628000" ...
	I1028 04:42:35.124804    4977 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:42:35.125045    4977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7f:eb:5e:9c:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubernetes-upgrade-628000/disk.qcow2
	I1028 04:42:35.133557    4977 main.go:141] libmachine: STDOUT: 
	I1028 04:42:35.133630    4977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:42:35.133704    4977 fix.go:56] duration metric: took 21.750125ms for fixHost
	I1028 04:42:35.133904    4977 start.go:83] releasing machines lock for "kubernetes-upgrade-628000", held for 22.054542ms
	W1028 04:42:35.134108    4977 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-628000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-628000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:42:35.141825    4977 out.go:201] 
	W1028 04:42:35.144857    4977 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:42:35.144908    4977 out.go:270] * 
	* 
	W1028 04:42:35.146461    4977 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:42:35.155858    4977 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-628000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-628000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-628000 version --output=json: exit status 1 (58.301792ms)

** stderr ** 
	error: context "kubernetes-upgrade-628000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-28 04:42:35.228025 -0700 PDT m=+3764.015342042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-628000 -n kubernetes-upgrade-628000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-628000 -n kubernetes-upgrade-628000: exit status 7 (36.645125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-628000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-628000
--- FAIL: TestKubernetesUpgrade (17.47s)
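
Every qemu2 start attempt in this test fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets its network attachment and minikube exits with GUEST_PROVISION. A minimal triage sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests (the gateway address below is illustrative, not taken from this run):

	# Does the socket the client dials actually exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded? (the launchd label is host-specific; adjust the grep)
	sudo launchctl list | grep -i socket_vmnet
	# If nothing is listening, run the daemon in the foreground to surface errors
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon healthy, the qemu-system-aarch64 invocation logged above should proceed past "Starting QEMU VM..." instead of failing inside LocalClient.Create.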

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.15s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19876
- KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1651642641/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.15s)
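
This is an environment mismatch rather than an upgrade bug: no hyperkit build exists for darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any driver-upgrade logic runs. A sketch of an architecture gate for the job script, assuming the intent on Apple Silicon agents is to skip these cases rather than record them as failures:

	# Skip the hyperkit driver-upgrade tests on arm64 Macs, where the driver cannot load
	if [ "$(uname -m)" = "arm64" ]; then
	  echo "SKIP TestHyperkitDriverSkipUpgrade: hyperkit is unsupported on darwin/arm64"
	else
	  go test ./test/integration -run 'TestHyperkitDriverSkipUpgrade'
	fi

The upgrade-v1.2.0-to-current variant below fails identically and would be covered by the same gate.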

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.91s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19876
- KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2357123785/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.91s)

TestStoppedBinaryUpgrade/Upgrade (574.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1744872595 start -p stopped-upgrade-714000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1744872595 start -p stopped-upgrade-714000 --memory=2200 --vm-driver=qemu2 : (40.602322208s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1744872595 -p stopped-upgrade-714000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1744872595 -p stopped-upgrade-714000 stop: (12.118309083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1028 04:43:42.909213    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:44:49.382397    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
E1028 04:45:06.286754    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.874411333s)

-- stdout --
	* [stopped-upgrade-714000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-714000" primary control-plane node in "stopped-upgrade-714000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-714000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1028 04:43:30.235542    5010 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:43:30.235738    5010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:43:30.235742    5010 out.go:358] Setting ErrFile to fd 2...
	I1028 04:43:30.235745    5010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:43:30.235910    5010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:43:30.237297    5010 out.go:352] Setting JSON to false
	I1028 04:43:30.258072    5010 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4381,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:43:30.258160    5010 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:43:30.263508    5010 out.go:177] * [stopped-upgrade-714000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:43:30.270411    5010 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:43:30.270447    5010 notify.go:220] Checking for updates...
	I1028 04:43:30.279422    5010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:43:30.282414    5010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:43:30.285410    5010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:43:30.292411    5010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:43:30.296421    5010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:43:30.299869    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:43:30.304413    5010 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 04:43:30.307466    5010 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:43:30.311398    5010 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:43:30.318437    5010 start.go:297] selected driver: qemu2
	I1028 04:43:30.318443    5010 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:30.318483    5010 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:43:30.321203    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:43:30.321235    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:43:30.321264    5010 start.go:340] cluster config:
	{Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:30.321317    5010 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:43:30.329451    5010 out.go:177] * Starting "stopped-upgrade-714000" primary control-plane node in "stopped-upgrade-714000" cluster
	I1028 04:43:30.332389    5010 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:43:30.332401    5010 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 04:43:30.332407    5010 cache.go:56] Caching tarball of preloaded images
	I1028 04:43:30.332456    5010 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:43:30.332462    5010 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 04:43:30.332505    5010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/config.json ...
	I1028 04:43:30.332873    5010 start.go:360] acquireMachinesLock for stopped-upgrade-714000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:43:30.332908    5010 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "stopped-upgrade-714000"
	I1028 04:43:30.332917    5010 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:43:30.332922    5010 fix.go:54] fixHost starting: 
	I1028 04:43:30.333028    5010 fix.go:112] recreateIfNeeded on stopped-upgrade-714000: state=Stopped err=<nil>
	W1028 04:43:30.333036    5010 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:43:30.341464    5010 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-714000" ...
	I1028 04:43:30.345419    5010 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:43:30.345487    5010 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57238-:22,hostfwd=tcp::57239-:2376,hostname=stopped-upgrade-714000 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/disk.qcow2
	I1028 04:43:30.392690    5010 main.go:141] libmachine: STDOUT: 
	I1028 04:43:30.392719    5010 main.go:141] libmachine: STDERR: 
	I1028 04:43:30.392727    5010 main.go:141] libmachine: Waiting for VM to start (ssh -p 57238 docker@127.0.0.1)...
	I1028 04:43:50.865436    5010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/config.json ...
	I1028 04:43:50.865698    5010 machine.go:93] provisionDockerMachine start ...
	I1028 04:43:50.865763    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:50.865917    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:50.865922    5010 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 04:43:50.935946    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 04:43:50.935965    5010 buildroot.go:166] provisioning hostname "stopped-upgrade-714000"
	I1028 04:43:50.936046    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:50.936171    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:50.936180    5010 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-714000 && echo "stopped-upgrade-714000" | sudo tee /etc/hostname
	I1028 04:43:51.008474    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-714000
	
	I1028 04:43:51.008556    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.008672    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.008681    5010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-714000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-714000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-714000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 04:43:51.076482    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 04:43:51.076496    5010 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19876-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19876-1087/.minikube}
	I1028 04:43:51.076505    5010 buildroot.go:174] setting up certificates
	I1028 04:43:51.076509    5010 provision.go:84] configureAuth start
	I1028 04:43:51.076520    5010 provision.go:143] copyHostCerts
	I1028 04:43:51.076604    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem, removing ...
	I1028 04:43:51.076612    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem
	I1028 04:43:51.076702    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.pem (1078 bytes)
	I1028 04:43:51.076889    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem, removing ...
	I1028 04:43:51.076894    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem
	I1028 04:43:51.076935    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/cert.pem (1123 bytes)
	I1028 04:43:51.077045    5010 exec_runner.go:144] found /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem, removing ...
	I1028 04:43:51.077049    5010 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem
	I1028 04:43:51.077088    5010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19876-1087/.minikube/key.pem (1679 bytes)
	I1028 04:43:51.077184    5010 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-714000 san=[127.0.0.1 localhost minikube stopped-upgrade-714000]
	I1028 04:43:51.111364    5010 provision.go:177] copyRemoteCerts
	I1028 04:43:51.111421    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 04:43:51.111430    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.148312    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 04:43:51.155757    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 04:43:51.163055    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 04:43:51.170544    5010 provision.go:87] duration metric: took 94.01925ms to configureAuth
	I1028 04:43:51.170560    5010 buildroot.go:189] setting minikube options for container-runtime
	I1028 04:43:51.170706    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:43:51.170799    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.170897    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.170903    5010 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 04:43:51.237070    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 04:43:51.237079    5010 buildroot.go:70] root file system type: tmpfs
	I1028 04:43:51.237135    5010 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 04:43:51.237200    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.237324    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.237357    5010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 04:43:51.304908    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
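The unit written above illustrates the systemd drop-in override pattern spelled out in its own comments: a bare ExecStart= first empties the command list inherited from the base unit, so exactly one start command remains, as Type=notify requires. A minimal Go sketch of rendering such a unit from a template follows; the template fields and flag value are illustrative, not minikube's actual provisioner code.

package main

import (
	"os"
	"text/template"
)

// unitTmpl is a hypothetical, trimmed-down drop-in. The empty ExecStart=
// line is what prevents systemd's "more than one ExecStart=" error for
// Type=notify services when this overrides a base unit.
const unitTmpl = `[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock {{.ExtraFlags}}
`

type unitOpts struct{ ExtraFlags string }

func main() {
	t := template.Must(template.New("docker-dropin").Parse(unitTmpl))
	// A provisioner would write this to docker.service.new and swap it in
	// only when it differs from the installed unit (see the diff step below).
	opts := unitOpts{ExtraFlags: "--default-ulimit=nofile=1048576:1048576"}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}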
	I1028 04:43:51.304969    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.305065    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.305072    5010 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 04:43:51.673807    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
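The command above is a guard idiom: diff -u exits non-zero when the candidate file differs from (or is missing at) the installed path, and only then is the new unit moved into place and the service reloaded, enabled, and restarted. A hedged Go sketch of the same flow, with paths, flags, and service name taken from the log and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

// installIfChanged moves newPath over livePath and restarts the service,
// but only when `diff -u` reports a difference (or livePath is absent).
func installIfChanged(newPath, livePath, service string) error {
	if exec.Command("sudo", "diff", "-u", livePath, newPath).Run() == nil {
		return nil // files identical: nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", newPath, livePath},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", service},
		{"sudo", "systemctl", "-f", "restart", service},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker")
	fmt.Println(err)
}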
	
	I1028 04:43:51.673821    5010 machine.go:96] duration metric: took 808.113625ms to provisionDockerMachine
	I1028 04:43:51.673829    5010 start.go:293] postStartSetup for "stopped-upgrade-714000" (driver="qemu2")
	I1028 04:43:51.673835    5010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 04:43:51.673912    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 04:43:51.673923    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.710754    5010 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 04:43:51.711928    5010 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 04:43:51.711940    5010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/addons for local assets ...
	I1028 04:43:51.712018    5010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19876-1087/.minikube/files for local assets ...
	I1028 04:43:51.712117    5010 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I1028 04:43:51.712222    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 04:43:51.715131    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:43:51.722285    5010 start.go:296] duration metric: took 48.451083ms for postStartSetup
	I1028 04:43:51.722300    5010 fix.go:56] duration metric: took 21.389298167s for fixHost
	I1028 04:43:51.722341    5010 main.go:141] libmachine: Using SSH client type: native
	I1028 04:43:51.722453    5010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10133a5f0] 0x10133ce30 <nil>  [] 0s} localhost 57238 <nil> <nil>}
	I1028 04:43:51.722458    5010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 04:43:51.788709    5010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115831.564287838
	
	I1028 04:43:51.788738    5010 fix.go:216] guest clock: 1730115831.564287838
	I1028 04:43:51.788751    5010 fix.go:229] Guest: 2024-10-28 04:43:51.564287838 -0700 PDT Remote: 2024-10-28 04:43:51.722301 -0700 PDT m=+21.519381918 (delta=-158.013162ms)
	I1028 04:43:51.788761    5010 fix.go:200] guest clock delta is within tolerance: -158.013162ms
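The clock check parses the guest's `date +%s.%N` output and subtracts the host's own timestamp; here the delta is -158.013162ms, inside tolerance. An illustrative Go version of that arithmetic, where the 1s tolerance is an assumption rather than minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest and returns
// guest time minus host time. Float parsing loses a little sub-microsecond
// precision, which is fine for a skew check.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, 1730115831722301000) // host timestamp from the log
	delta, err := guestClockDelta("1730115831.564287838", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed bound; the real one may differ
	fmt.Printf("delta=%v within tolerance=%v\n",
		delta, delta > -tolerance && delta < tolerance)
}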
	I1028 04:43:51.788764    5010 start.go:83] releasing machines lock for "stopped-upgrade-714000", held for 21.455770375s
	I1028 04:43:51.788837    5010 ssh_runner.go:195] Run: cat /version.json
	I1028 04:43:51.788847    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:43:51.788837    5010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 04:43:51.788875    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	W1028 04:43:51.789349    5010 sshutil.go:64] dial failure (will retry): dial tcp [::1]:57238: connect: connection refused
	I1028 04:43:51.789368    5010 retry.go:31] will retry after 218.453401ms: dial tcp [::1]:57238: connect: connection refused
	W1028 04:43:52.044384    5010 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 04:43:52.044438    5010 ssh_runner.go:195] Run: systemctl --version
	I1028 04:43:52.046392    5010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 04:43:52.048060    5010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 04:43:52.048099    5010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 04:43:52.051177    5010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 04:43:52.056051    5010 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
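The two find/sed pipelines above pin every bridge and podman CNI config to the pod CIDR 10.244.0.0/16 (and gateway 10.244.0.1). A sketch of the same rewrite done structurally over JSON instead of with sed; the conflist shape here is simplified, real bridge configs nest the subnet inside ipam ranges:

package main

import (
	"encoding/json"
	"fmt"
)

// setSubnet rewrites every plugin's ipam.subnet in a CNI conflist.
func setSubnet(conflist []byte, subnet string) ([]byte, error) {
	var doc map[string]any
	if err := json.Unmarshal(conflist, &doc); err != nil {
		return nil, err
	}
	plugins, _ := doc["plugins"].([]any)
	for _, p := range plugins {
		if plugin, ok := p.(map[string]any); ok {
			if ipam, ok := plugin["ipam"].(map[string]any); ok {
				ipam["subnet"] = subnet // pin the pod CIDR
			}
		}
	}
	return json.MarshalIndent(doc, "", "  ")
}

func main() {
	in := []byte(`{"plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}]}`)
	out, err := setSubnet(in, "10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}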
	I1028 04:43:52.056060    5010 start.go:495] detecting cgroup driver to use...
	I1028 04:43:52.056137    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:43:52.063220    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 04:43:52.066629    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 04:43:52.069567    5010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 04:43:52.069598    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 04:43:52.072426    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:43:52.075696    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 04:43:52.078976    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 04:43:52.082313    5010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 04:43:52.085134    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 04:43:52.088115    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 04:43:52.091408    5010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 04:43:52.094861    5010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 04:43:52.097539    5010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 04:43:52.100201    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:52.178468    5010 ssh_runner.go:195] Run: sudo systemctl restart containerd
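The sed edits above switch containerd to the cgroupfs driver by forcing SystemdCgroup = false in /etc/containerd/config.toml before the daemon-reload and restart. An equivalent line-oriented rewrite in Go; this is a sketch that mirrors the sed expression, and a TOML parser would be the sturdier choice:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites `SystemdCgroup = ...` lines in config.toml,
// preserving indentation, exactly as the sed expression in the log does.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	repl := []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
	return os.WriteFile(path, re.ReplaceAll(data, repl), 0644)
}

func main() {
	// false selects the cgroupfs driver, matching the log's choice.
	fmt.Println(setSystemdCgroup("/etc/containerd/config.toml", false))
}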
	I1028 04:43:52.189160    5010 start.go:495] detecting cgroup driver to use...
	I1028 04:43:52.189250    5010 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 04:43:52.195750    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:43:52.200633    5010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 04:43:52.211204    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 04:43:52.216596    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 04:43:52.221617    5010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 04:43:52.283243    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 04:43:52.288481    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 04:43:52.293696    5010 ssh_runner.go:195] Run: which cri-dockerd
	I1028 04:43:52.294931    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 04:43:52.297614    5010 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 04:43:52.302609    5010 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 04:43:52.382354    5010 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 04:43:52.476000    5010 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 04:43:52.476061    5010 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 04:43:52.481643    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:52.559434    5010 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:43:53.721176    5010 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161720041s)
	I1028 04:43:53.721250    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 04:43:53.726074    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:43:53.730560    5010 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 04:43:53.806730    5010 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 04:43:53.880318    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:53.957021    5010 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 04:43:53.962746    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 04:43:53.967206    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:54.053208    5010 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 04:43:54.092580    5010 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 04:43:54.092683    5010 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 04:43:54.095700    5010 start.go:563] Will wait 60s for crictl version
	I1028 04:43:54.095764    5010 ssh_runner.go:195] Run: which crictl
	I1028 04:43:54.097219    5010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 04:43:54.112558    5010 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 04:43:54.112635    5010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:43:54.129692    5010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 04:43:54.148203    5010 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 04:43:54.148356    5010 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 04:43:54.149608    5010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
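The bash one-liner above is a replace-or-append for a hosts entry: filter out any existing line ending in the name, append the fresh mapping, and copy the temp file over /etc/hosts. The same idiom in Go, as a sketch; it needs write access to the file, which the log gets via sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line for name, appends the new mapping,
// and replaces the file via temp-and-rename, mirroring the shell pipeline.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(setHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"))
}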
	I1028 04:43:54.152956    5010 kubeadm.go:883] updating cluster {Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 04:43:54.153001    5010 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 04:43:54.153049    5010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:43:54.163242    5010 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:43:54.163254    5010 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:43:54.163320    5010 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:43:54.166876    5010 ssh_runner.go:195] Run: which lz4
	I1028 04:43:54.168155    5010 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 04:43:54.169447    5010 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 04:43:54.169457    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 04:43:55.127782    5010 docker.go:653] duration metric: took 959.660875ms to copy over tarball
	I1028 04:43:55.127856    5010 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 04:43:56.315986    5010 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18810975s)
	I1028 04:43:56.316006    5010 ssh_runner.go:146] rm: /preloaded.tar.lz4
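The preload path above probes for /preloaded.tar.lz4 with stat, copies the cached tarball over when absent, extracts it into /var with lz4-filtered tar, then deletes it. A condensed sketch of the guest-side steps, assuming the tar and lz4 binaries are present:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Existence probe mirrors the log's `stat -c "%s %y"` call; a provisioner
	// would scp the cached tarball over only when this fails.
	if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("tarball missing; copy it from the host cache first")
		return
	}
	// -I lz4 filters the archive through lz4; --xattrs-include keeps
	// security.capability so extracted binaries retain file capabilities.
	out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	// The tarball is single-use; remove it to reclaim ~350MB of disk.
	_ = exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
}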
	I1028 04:43:56.332487    5010 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 04:43:56.335828    5010 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 04:43:56.340883    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:56.424411    5010 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 04:43:58.069300    5010 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.644865584s)
	I1028 04:43:58.069396    5010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 04:43:58.083100    5010 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 04:43:58.083112    5010 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 04:43:58.083117    5010 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 04:43:58.089165    5010 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:58.090720    5010 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.092722    5010 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.094795    5010 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:58.100117    5010 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.100126    5010 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.100379    5010 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.101246    5010 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.102035    5010 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.102258    5010 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 04:43:58.103405    5010 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.103416    5010 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.103733    5010 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.103821    5010 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.104308    5010 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 04:43:58.105394    5010 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.617351    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.628486    5010 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 04:43:58.628526    5010 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.628583    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 04:43:58.647808    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 04:43:58.666301    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.676850    5010 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 04:43:58.676876    5010 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.676950    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 04:43:58.679106    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.689968    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 04:43:58.700374    5010 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 04:43:58.700398    5010 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.700452    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 04:43:58.712308    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1028 04:43:58.765176    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.777151    5010 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 04:43:58.777173    5010 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.777234    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 04:43:58.787541    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 04:43:58.793484    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.808402    5010 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 04:43:58.808431    5010 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.808502    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 04:43:58.819585    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 04:43:58.841669    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 04:43:58.856486    5010 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 04:43:58.856509    5010 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 04:43:58.856576    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1028 04:43:58.867524    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 04:43:58.867661    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 04:43:58.870147    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 04:43:58.870166    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 04:43:58.879824    5010 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 04:43:58.879837    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 04:43:58.907529    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1028 04:43:58.927969    5010 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 04:43:58.928131    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.946047    5010 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 04:43:58.946066    5010 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.946131    5010 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 04:43:58.956893    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 04:43:58.957041    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:43:58.958729    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 04:43:58.958749    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1028 04:43:58.996987    5010 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 04:43:58.997118    5010 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.005875    5010 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 04:43:59.005889    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 04:43:59.014612    5010 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 04:43:59.014636    5010 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.014701    5010 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:43:59.055590    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 04:43:59.055711    5010 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 04:43:59.055898    5010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:43:59.057572    5010 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 04:43:59.057587    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 04:43:59.091855    5010 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 04:43:59.091878    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 04:43:59.355036    5010 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 04:43:59.355075    5010 cache_images.go:92] duration metric: took 1.271945833s to LoadCachedImages
	W1028 04:43:59.355118    5010 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
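Each image above goes through the same decision: inspect the tag's ID in the runtime and mark it "needs transfer" when the tag is missing or the ID differs from the expected per-arch hash, in which case the stale tag is removed and the cached tarball is piped into `docker load`. A sketch of that predicate; the expected ID in main is truncated and illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the tag or holds a
// different image ID than the cache expects.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // tag not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.7", "e5a475a03805..."))
}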
	I1028 04:43:59.355123    5010 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 04:43:59.355184    5010 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-714000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 04:43:59.355273    5010 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 04:43:59.372658    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:43:59.372677    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:43:59.372686    5010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 04:43:59.372695    5010 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-714000 NodeName:stopped-upgrade-714000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 04:43:59.372771    5010 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-714000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 04:43:59.372843    5010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 04:43:59.376108    5010 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 04:43:59.376149    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 04:43:59.378864    5010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 04:43:59.383853    5010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 04:43:59.388942    5010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1028 04:43:59.394283    5010 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 04:43:59.395496    5010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 04:43:59.399290    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:43:59.478039    5010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:43:59.486010    5010 certs.go:68] Setting up /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000 for IP: 10.0.2.15
	I1028 04:43:59.486023    5010 certs.go:194] generating shared ca certs ...
	I1028 04:43:59.486051    5010 certs.go:226] acquiring lock for ca certs: {Name:mk8f0a455373409f6ac5dde02ca67c613058d85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.486212    5010 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key
	I1028 04:43:59.486436    5010 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key
	I1028 04:43:59.486444    5010 certs.go:256] generating profile certs ...
	I1028 04:43:59.486626    5010 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key
	I1028 04:43:59.486642    5010 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88
	I1028 04:43:59.486654    5010 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1028 04:43:59.605686    5010 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 ...
	I1028 04:43:59.605702    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88: {Name:mkf90a32438488277276118ea1523e9c870be5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.605963    5010 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88 ...
	I1028 04:43:59.605968    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88: {Name:mk2533bda2712187e273c8edda27e29f50a220f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.606121    5010 certs.go:381] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt.c6767b88 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt
	I1028 04:43:59.606236    5010 certs.go:385] copying /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key.c6767b88 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key
	I1028 04:43:59.606484    5010 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.key
	I1028 04:43:59.606629    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem (1338 bytes)
	W1028 04:43:59.606787    5010 certs.go:480] ignoring /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I1028 04:43:59.606793    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 04:43:59.606816    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem (1078 bytes)
	I1028 04:43:59.606835    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem (1123 bytes)
	I1028 04:43:59.606853    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/key.pem (1679 bytes)
	I1028 04:43:59.606890    5010 certs.go:484] found cert: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I1028 04:43:59.607211    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 04:43:59.614671    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 04:43:59.621476    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 04:43:59.628082    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 04:43:59.635089    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 04:43:59.642192    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 04:43:59.648684    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 04:43:59.655586    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 04:43:59.662462    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I1028 04:43:59.668560    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 04:43:59.675729    5010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I1028 04:43:59.682796    5010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 04:43:59.688021    5010 ssh_runner.go:195] Run: openssl version
	I1028 04:43:59.690026    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I1028 04:43:59.692839    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.694106    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:47 /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.694137    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I1028 04:43:59.695695    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
	I1028 04:43:59.699144    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I1028 04:43:59.702249    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.703583    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:47 /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.703610    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I1028 04:43:59.705473    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 04:43:59.708258    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 04:43:59.711489    5010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.712781    5010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:40 /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.712801    5010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 04:43:59.714460    5010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
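Each block above installs a CA by OpenSSL's subject-hash convention: `openssl x509 -hash -noout` yields the short hash (b5213941 for minikubeCA), and the cert is symlinked as /etc/ssl/certs/<hash>.0 so the default verifier, which looks certs up by hash, can find it. A sketch of the same two steps driven from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA symlinks a cert under its OpenSSL subject hash so the default
// verifier (which scans /etc/ssl/certs by hash) can locate it.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// The ".0" suffix disambiguates hash collisions (.1, .2, ...); -f
	// replaces a stale link left by a previous run.
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}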
	I1028 04:43:59.717216    5010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 04:43:59.718542    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 04:43:59.720889    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 04:43:59.722651    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 04:43:59.724672    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 04:43:59.726521    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 04:43:59.728340    5010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
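The `-checkend 86400` probes ask whether each cert expires within the next 24 hours. A pure-Go equivalent using crypto/x509; the path in main is copied from the log, and any PEM cert path works:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 analogue of `openssl x509 -checkend`:
// true when the cert's NotAfter falls inside the next d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin(
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}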
	I1028 04:43:59.730303    5010 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 04:43:59.730376    5010 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:43:59.740331    5010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 04:43:59.743659    5010 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 04:43:59.743664    5010 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 04:43:59.743704    5010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 04:43:59.746478    5010 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 04:43:59.746776    5010 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-714000" does not appear in /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:43:59.746873    5010 kubeconfig.go:62] /Users/jenkins/minikube-integration/19876-1087/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-714000" cluster setting kubeconfig missing "stopped-upgrade-714000" context setting]
	I1028 04:43:59.747062    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:43:59.747484    5010 kapi.go:59] client config for stopped-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102d96680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:43:59.747934    5010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 04:43:59.750735    5010 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-714000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1028 04:43:59.750741    5010 kubeadm.go:1160] stopping kube-system containers ...
	I1028 04:43:59.750787    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 04:43:59.761284    5010 docker.go:483] Stopping containers: [845363640e9e 4c70160b1032 5446ff2ad4cf 4deb81f71238 20726be67192 a3d1fe7e80ae cc397994f5aa a160fc9ffecb]
	I1028 04:43:59.761352    5010 ssh_runner.go:195] Run: docker stop 845363640e9e 4c70160b1032 5446ff2ad4cf 4deb81f71238 20726be67192 a3d1fe7e80ae cc397994f5aa a160fc9ffecb
	I1028 04:43:59.771777    5010 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 04:43:59.777823    5010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:43:59.780513    5010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:43:59.780519    5010 kubeadm.go:157] found existing configuration files:
	
	I1028 04:43:59.780547    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf
	I1028 04:43:59.783416    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:43:59.783446    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:43:59.786617    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf
	I1028 04:43:59.789351    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:43:59.789382    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:43:59.792075    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf
	I1028 04:43:59.795088    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:43:59.795116    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:43:59.798024    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf
	I1028 04:43:59.800365    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:43:59.800397    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 04:43:59.803301    5010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:43:59.806334    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:43:59.827761    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.200704    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.336971    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 04:44:00.362378    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
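Each kubeadm init phase above is issued as its own bash command, with the version-pinned binaries directory prepended to PATH so the matching kubeadm is found. A sketch of that loop, under the same caveat as before (phase names, paths, and the v1.24.1 directory come from the log; local execution is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Illustrative sketch: run the five kubeadm init phases from the log,
    // stopping at the first failure.
    func main() {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
    				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }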
	I1028 04:44:00.385602    5010 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:44:00.385695    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:00.888088    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:01.387839    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:44:01.392194    5010 api_server.go:72] duration metric: took 1.006588875s to wait for apiserver process to appear ...
	I1028 04:44:01.392204    5010 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:44:01.392219    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:06.394425    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:06.394587    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:11.395509    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:11.395548    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:16.396191    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:16.396254    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:21.397275    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:21.397294    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:26.398099    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:26.398175    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:31.398983    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:31.398997    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:36.400366    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:36.400386    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:41.402279    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:41.402327    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:46.404692    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:46.404712    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:51.406984    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:51.407022    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:44:56.409318    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:44:56.409358    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:01.411706    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
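The repeating pair of lines above is a fixed-timeout health probe: each GET to /healthz is given roughly five seconds (the error text "Client.Timeout exceeded while awaiting headers" and the ~5s spacing between checks both point at a per-request client timeout), and the probe is re-issued until an overall deadline passes. A minimal sketch of that loop; the target URL and 5s timeout are inferred from the log, while the overall deadline and skipping TLS verification for a raw-IP endpoint are assumptions of the sketch:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between checks in the log
    		Transport: &http.Transport{
    			// Assumption: the probe targets a raw IP, so certificate
    			// verification is skipped in this sketch.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("apiserver never became healthy")
    }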
	I1028 04:45:01.411896    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:01.424419    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:01.424512    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:01.435508    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:01.435593    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:01.445887    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:01.445975    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:01.459364    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:01.459450    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:01.469961    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:01.470048    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:01.480724    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:01.480798    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:01.491109    5010 logs.go:282] 0 containers: []
	W1028 04:45:01.491122    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:01.491191    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:01.501707    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:01.501728    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:01.501733    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:01.515617    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:01.515627    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:01.526753    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:01.526766    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:01.539708    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:01.539716    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:01.554131    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:01.554142    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:01.558522    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:01.558531    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:01.600474    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:01.600485    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:01.611838    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:01.611849    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:01.708247    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:01.708257    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:01.729764    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:01.729773    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:01.741428    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:01.741442    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:01.761660    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:01.761671    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:01.779095    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:01.779105    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:01.817744    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:01.817758    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:01.834659    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:01.834670    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:01.846543    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:01.846556    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:01.859427    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:01.859439    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
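While the probe keeps failing, each retry round falls back to evidence gathering, as in the block above: list container IDs per control-plane component with a docker name filter, then pull the last 400 lines of each container's logs (the same cycle repeats below on every retry). A sketch of that fan-out; the component names, the k8s_ name prefix, and the --tail 400 value come from the log, and everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		// List all containers (running or exited) whose name matches the component.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Keep only the tail of each log, as the run above does.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }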
	I1028 04:45:04.387619    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:09.389940    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:09.390305    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:09.416587    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:09.416717    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:09.433921    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:09.434020    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:09.448718    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:09.448798    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:09.460116    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:09.460194    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:09.470253    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:09.470338    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:09.481459    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:09.481538    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:09.492069    5010 logs.go:282] 0 containers: []
	W1028 04:45:09.492083    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:09.492148    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:09.502232    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:09.502254    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:09.502260    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:09.514105    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:09.514119    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:09.527672    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:09.527684    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:09.558481    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:09.558490    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:09.571030    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:09.571041    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:09.584944    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:09.584960    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:09.599491    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:09.599502    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:09.614695    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:09.614706    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:09.626124    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:09.626135    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:09.638085    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:09.638103    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:09.650185    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:09.650194    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:09.662123    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:09.662137    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:09.698841    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:09.698852    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:09.702839    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:09.702848    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:09.740599    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:09.740615    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:09.778338    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:09.778349    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:09.795203    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:09.795214    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:12.314845    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:17.317242    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:17.317582    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:17.345705    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:17.345851    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:17.363728    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:17.363818    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:17.377293    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:17.377378    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:17.389143    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:17.389213    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:17.399934    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:17.400007    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:17.410535    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:17.410610    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:17.421042    5010 logs.go:282] 0 containers: []
	W1028 04:45:17.421057    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:17.421113    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:17.431485    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:17.431514    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:17.431521    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:17.436083    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:17.436092    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:17.474949    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:17.474966    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:17.490519    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:17.490529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:17.502840    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:17.502854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:17.519985    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:17.519997    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:17.531111    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:17.531123    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:17.554700    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:17.554706    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:17.590332    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:17.590343    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:17.604561    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:17.604570    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:17.615985    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:17.615999    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:17.627576    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:17.627586    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:17.664340    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:17.664348    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:17.678633    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:17.678644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:17.694552    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:17.694562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:17.705785    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:17.705796    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:17.719574    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:17.719584    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:20.233958    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:25.236265    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:25.236428    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:25.253513    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:25.253615    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:25.267004    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:25.267086    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:25.278036    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:25.278111    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:25.288717    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:25.288797    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:25.299374    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:25.299457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:25.311800    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:25.311874    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:25.322457    5010 logs.go:282] 0 containers: []
	W1028 04:45:25.322473    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:25.322545    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:25.333541    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:25.333562    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:25.333567    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:25.373809    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:25.373823    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:25.392908    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:25.392920    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:25.409568    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:25.409579    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:25.447086    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:25.447097    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:25.464908    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:25.464919    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:25.477417    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:25.477427    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:25.488848    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:25.488859    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:25.514460    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:25.514468    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:25.518819    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:25.518828    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:25.532621    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:25.532630    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:25.549580    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:25.549591    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:25.560639    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:25.560650    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:25.572327    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:25.572339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:25.583536    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:25.583546    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:25.622722    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:25.622732    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:25.637920    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:25.637929    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:28.151934    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:33.153521    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:33.153806    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:33.181354    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:33.181458    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:33.198507    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:33.198595    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:33.210823    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:33.210900    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:33.221571    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:33.221654    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:33.232584    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:33.232660    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:33.243352    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:33.243430    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:33.253616    5010 logs.go:282] 0 containers: []
	W1028 04:45:33.253628    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:33.253687    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:33.264012    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:33.264030    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:33.264035    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:33.300796    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:33.300806    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:33.338732    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:33.338743    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:33.356103    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:33.356113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:33.371644    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:33.371655    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:33.383319    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:33.383331    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:33.387668    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:33.387679    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:33.401583    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:33.401593    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:33.415760    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:33.415770    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:33.426508    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:33.426519    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:33.438222    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:33.438232    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:33.451972    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:33.451986    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:33.469480    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:33.469493    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:33.482483    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:33.482494    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:33.519942    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:33.519953    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:33.532427    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:33.532441    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:33.545019    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:33.545029    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:36.071658    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:41.074071    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:41.074399    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:41.105562    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:41.105714    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:41.123431    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:41.123536    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:41.137077    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:41.137170    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:41.149585    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:41.149665    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:41.161182    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:41.161263    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:41.172059    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:41.172135    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:41.182459    5010 logs.go:282] 0 containers: []
	W1028 04:45:41.182476    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:41.182546    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:41.199661    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:41.199681    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:41.199688    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:41.215436    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:41.215447    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:41.234043    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:41.234054    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:41.248548    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:41.248562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:41.259857    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:41.259869    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:41.264139    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:41.264146    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:41.279560    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:41.279574    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:41.316873    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:41.316883    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:41.341644    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:41.341656    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:41.355328    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:41.355345    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:41.390703    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:41.390713    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:41.402819    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:41.402830    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:41.415070    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:41.415079    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:41.428615    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:41.428631    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:41.444557    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:41.444571    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:41.480541    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:41.480549    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:41.494298    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:41.494311    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:44.014232    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:49.016689    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:49.016977    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:49.033037    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:49.033140    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:49.045887    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:49.045972    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:49.056824    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:49.056890    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:49.069212    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:49.069286    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:49.079611    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:49.079675    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:49.090501    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:49.090573    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:49.100510    5010 logs.go:282] 0 containers: []
	W1028 04:45:49.100525    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:49.100592    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:49.111094    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:49.111112    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:49.111117    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:49.121954    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:49.121963    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:49.133540    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:49.133550    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:49.154979    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:49.154988    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:49.167446    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:49.167455    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:49.181500    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:49.181509    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:49.195038    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:49.195046    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:49.209317    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:49.209326    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:49.220657    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:49.220668    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:49.236679    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:49.236688    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:49.274352    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:49.274369    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:49.318139    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:49.318150    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:49.329842    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:49.329854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:49.342013    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:49.342024    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:49.353608    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:49.353618    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:49.357978    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:49.357987    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:49.396506    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:49.396517    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:51.923951    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:45:56.925463    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:45:56.925658    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:45:56.949180    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:45:56.949307    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:45:56.966537    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:45:56.966633    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:45:56.978931    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:45:56.979017    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:45:56.990090    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:45:56.990169    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:45:57.000635    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:45:57.000715    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:45:57.011509    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:45:57.011587    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:45:57.021989    5010 logs.go:282] 0 containers: []
	W1028 04:45:57.022001    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:45:57.022060    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:45:57.032631    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:45:57.032653    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:45:57.032659    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:45:57.057651    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:45:57.057659    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:45:57.069211    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:45:57.069223    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:45:57.073595    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:45:57.073601    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:45:57.109358    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:45:57.109369    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:45:57.120797    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:45:57.120810    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:45:57.134631    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:45:57.134644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:45:57.150508    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:45:57.150518    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:45:57.165645    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:45:57.165657    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:45:57.205488    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:45:57.205499    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:45:57.218037    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:45:57.218047    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:45:57.229230    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:45:57.229241    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:45:57.243024    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:45:57.243033    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:45:57.258587    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:45:57.258598    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:45:57.295024    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:45:57.295037    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:45:57.310352    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:45:57.310362    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:45:57.322831    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:45:57.322842    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:45:59.842586    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:04.845353    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:04.845749    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:04.879590    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:04.879730    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:04.898484    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:04.898570    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:04.912736    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:04.912825    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:04.925614    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:04.925695    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:04.936489    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:04.936567    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:04.947441    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:04.947525    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:04.957786    5010 logs.go:282] 0 containers: []
	W1028 04:46:04.957802    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:04.957861    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:04.969062    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:04.969080    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:04.969085    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:04.980874    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:04.980885    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:04.998535    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:04.998545    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:05.034427    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:05.034438    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:05.048768    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:05.048782    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:05.062468    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:05.062480    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:05.101344    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:05.101357    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:05.116270    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:05.116282    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:05.128323    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:05.128337    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:05.147914    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:05.147925    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:05.161501    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:05.161512    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:05.166221    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:05.166229    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:05.182226    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:05.182238    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:05.215839    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:05.215854    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:05.231416    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:05.231428    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:05.254831    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:05.254840    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:05.290883    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:05.290891    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:07.808496    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:12.810919    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:12.811168    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:12.833832    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:12.833955    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:12.849570    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:12.849666    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:12.862011    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:12.862094    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:12.873318    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:12.873396    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:12.883984    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:12.884062    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:12.894842    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:12.894930    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:12.906403    5010 logs.go:282] 0 containers: []
	W1028 04:46:12.906415    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:12.906487    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:12.921370    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:12.921391    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:12.921398    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:12.937551    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:12.937560    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:12.973971    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:12.973985    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:13.010014    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:13.010028    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:13.024621    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:13.024631    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:13.039222    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:13.039232    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:13.060635    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:13.060651    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:13.074006    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:13.074016    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:13.085647    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:13.085657    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:13.097784    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:13.097793    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:13.109218    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:13.109229    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:13.114039    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:13.114045    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:13.131228    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:13.131237    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:13.155950    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:13.155959    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:13.193995    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:13.194005    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:13.206165    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:13.206176    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:13.218501    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:13.218512    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:15.741820    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:20.744154    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:20.744375    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:20.768921    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:20.769031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:20.783301    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:20.783390    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:20.797458    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:20.797531    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:20.808227    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:20.808310    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:20.818421    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:20.818495    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:20.828708    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:20.828788    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:20.838766    5010 logs.go:282] 0 containers: []
	W1028 04:46:20.838777    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:20.838835    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:20.848996    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:20.849018    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:20.849023    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:20.888283    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:20.888296    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:20.900402    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:20.900414    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:20.919101    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:20.919113    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:20.943854    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:20.943862    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:20.981863    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:20.981875    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:20.996994    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:20.997006    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:21.008757    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:21.008766    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:21.020599    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:21.020610    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:21.024771    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:21.024778    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:21.065116    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:21.065126    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:21.076499    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:21.076512    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:21.094185    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:21.094195    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:21.106640    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:21.106650    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:21.123635    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:21.123647    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:21.138341    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:21.138352    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:21.149898    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:21.149910    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:23.665852    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:28.668152    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:28.668300    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:28.681649    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:28.681742    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:28.692924    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:28.693008    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:28.703154    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:28.703234    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:28.713853    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:28.713928    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:28.724214    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:28.724295    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:28.734923    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:28.734998    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:28.745549    5010 logs.go:282] 0 containers: []
	W1028 04:46:28.745561    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:28.745626    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:28.756045    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:28.756067    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:28.756073    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:28.760183    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:28.760192    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:28.794099    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:28.794113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:28.809023    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:28.809033    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:28.820685    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:28.820695    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:28.844597    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:28.844605    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:28.882839    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:28.882847    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:28.920542    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:28.920553    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:28.934255    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:28.934264    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:28.945541    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:28.945552    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:28.966619    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:28.966629    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:28.980904    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:28.980913    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:28.995856    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:28.995865    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:29.007561    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:29.007570    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:29.023301    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:29.023311    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:29.034897    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:29.034909    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:29.045956    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:29.045967    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:31.560119    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:36.562478    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:36.562586    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:36.574046    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:36.574127    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:36.584634    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:36.584706    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:36.595111    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:36.595189    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:36.605381    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:36.605457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:36.616113    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:36.616189    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:36.627366    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:36.627449    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:36.645087    5010 logs.go:282] 0 containers: []
	W1028 04:46:36.645099    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:36.645158    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:36.655709    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:36.655728    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:36.655733    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:36.690385    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:36.690396    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:36.705443    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:36.705454    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:36.743342    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:36.743355    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:36.759037    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:36.759048    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:36.780628    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:36.780638    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:36.820349    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:36.820359    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:36.832710    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:36.832720    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:36.843888    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:36.843898    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:36.856310    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:36.856320    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:36.869448    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:36.869458    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:36.883849    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:36.883860    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:36.896012    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:36.896023    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:36.908210    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:36.908220    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:36.926124    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:36.926133    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:36.938014    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:36.938024    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:36.961890    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:36.961897    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:39.468425    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:44.469326    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:44.469457    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:44.483452    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:44.483541    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:44.494613    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:44.494696    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:44.505554    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:44.505627    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:44.515674    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:44.515754    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:44.526618    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:44.526697    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:44.537561    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:44.537639    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:44.548186    5010 logs.go:282] 0 containers: []
	W1028 04:46:44.548198    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:44.548263    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:44.558778    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:44.558802    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:44.558807    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:44.595761    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:44.595769    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:44.599909    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:44.599915    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:44.636619    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:44.636629    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:44.648032    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:44.648046    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:44.659873    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:44.659883    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:44.694259    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:44.694270    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:44.709333    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:44.709343    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:44.720815    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:44.720824    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:44.732903    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:44.732913    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:44.749083    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:44.749092    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:44.771191    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:44.771200    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:44.788202    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:44.788214    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:44.802743    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:44.802752    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:44.814208    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:44.814221    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:44.825990    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:44.826000    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:44.843665    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:44.843678    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:47.361040    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:46:52.363805    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:46:52.364099    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:46:52.392847    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:46:52.392973    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:46:52.414678    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:46:52.414775    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:46:52.427470    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:46:52.427553    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:46:52.439239    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:46:52.439322    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:46:52.451551    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:46:52.451626    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:46:52.462422    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:46:52.462488    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:46:52.472804    5010 logs.go:282] 0 containers: []
	W1028 04:46:52.472816    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:46:52.472883    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:46:52.483984    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:46:52.484003    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:46:52.484008    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:46:52.500701    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:46:52.500712    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:46:52.523442    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:46:52.523452    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:46:52.561784    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:46:52.561795    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:46:52.575958    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:46:52.575969    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:46:52.590382    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:46:52.590392    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:46:52.602220    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:46:52.602230    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:46:52.620775    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:46:52.620790    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:46:52.633052    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:46:52.633065    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:46:52.644723    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:46:52.644733    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:46:52.656235    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:46:52.656249    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:46:52.692624    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:46:52.692633    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:46:52.727437    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:46:52.727448    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:46:52.745195    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:46:52.745205    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:46:52.761599    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:46:52.761609    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:46:52.776787    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:46:52.776798    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:46:52.780805    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:46:52.780811    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:46:55.294540    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:00.297288    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:00.297480    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:00.316840    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:00.316937    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:00.329341    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:00.329415    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:00.340234    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:00.340311    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:00.350639    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:00.350721    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:00.365178    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:00.365249    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:00.375829    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:00.375909    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:00.386606    5010 logs.go:282] 0 containers: []
	W1028 04:47:00.386631    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:00.386695    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:00.397500    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:00.397523    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:00.397528    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:00.411907    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:00.411917    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:00.423723    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:00.423735    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:00.439433    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:00.439442    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:00.450886    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:00.450898    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:00.455392    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:00.455400    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:00.491904    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:00.491916    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:00.503733    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:00.503749    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:00.525910    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:00.525921    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:00.551438    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:00.551448    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:00.563905    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:00.563917    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:00.603682    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:00.603694    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:00.622785    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:00.622794    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:00.637934    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:00.637945    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:00.653469    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:00.653478    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:00.664815    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:00.664827    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:00.703550    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:00.703562    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:03.217556    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:08.219904    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:08.220084    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:08.235958    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:08.236064    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:08.248988    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:08.249065    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:08.261987    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:08.262061    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:08.272141    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:08.272220    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:08.282658    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:08.282727    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:08.293654    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:08.293730    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:08.303548    5010 logs.go:282] 0 containers: []
	W1028 04:47:08.303559    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:08.303619    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:08.314411    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:08.314431    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:08.314436    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:08.326264    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:08.326273    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:08.359681    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:08.359690    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:08.399003    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:08.399012    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:08.433433    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:08.433445    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:08.447790    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:08.447801    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:08.459227    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:08.459237    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:08.470538    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:08.470548    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:08.495178    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:08.495189    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:08.499743    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:08.499750    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:08.513509    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:08.513518    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:08.529107    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:08.529116    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:08.543099    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:08.543113    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:08.554889    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:08.554898    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:08.569291    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:08.569305    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:08.611447    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:08.611458    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:08.623102    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:08.623115    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:11.136006    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:16.138364    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:16.138521    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:16.151688    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:16.151766    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:16.162459    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:16.162544    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:16.173485    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:16.173565    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:16.184106    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:16.184183    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:16.195289    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:16.195367    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:16.209175    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:16.209254    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:16.219625    5010 logs.go:282] 0 containers: []
	W1028 04:47:16.219638    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:16.219703    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:16.236724    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:16.236744    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:16.236750    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:16.275144    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:16.275156    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:16.287437    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:16.287449    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:16.298833    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:16.298844    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:16.310459    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:16.310473    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:16.324643    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:16.324653    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:16.336812    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:16.336825    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:16.351516    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:16.351529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:16.366297    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:16.366307    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:16.378475    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:16.378486    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:16.399177    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:16.399187    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:16.436149    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:16.436161    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:16.440445    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:16.440454    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:16.475606    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:16.475617    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:16.489894    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:16.489905    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:16.504076    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:16.504089    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:16.515458    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:16.515469    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:19.040219    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:24.042830    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:24.043008    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:24.055456    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:24.055538    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:24.065924    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:24.066035    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:24.077122    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:24.077202    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:24.087447    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:24.087527    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:24.097897    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:24.097981    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:24.108647    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:24.108724    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:24.119040    5010 logs.go:282] 0 containers: []
	W1028 04:47:24.119051    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:24.119114    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:24.129668    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:24.129689    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:24.129698    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:24.141809    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:24.141823    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:24.156017    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:24.156029    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:24.167300    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:24.167310    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:24.178634    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:24.178645    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:24.193401    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:24.193411    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:24.212142    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:24.212152    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:24.225960    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:24.225971    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:24.241152    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:24.241163    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:24.253453    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:24.253462    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:24.258128    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:24.258134    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:24.293692    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:24.293703    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:24.331461    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:24.331472    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:24.345480    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:24.345491    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:24.357052    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:24.357061    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:24.368990    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:24.369000    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:24.392409    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:24.392417    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:26.933108    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:31.935433    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:31.935612    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:31.947726    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:31.947803    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:31.957929    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:31.958005    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:31.968033    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:31.968109    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:31.984390    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:31.984470    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:31.998969    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:31.999040    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:32.009707    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:32.009784    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:32.020204    5010 logs.go:282] 0 containers: []
	W1028 04:47:32.020217    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:32.020286    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:32.030911    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:32.030929    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:32.030935    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:32.045546    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:32.045557    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:32.056916    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:32.056927    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:32.068836    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:32.068846    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:32.082257    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:32.082268    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:32.093832    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:32.093843    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:32.104987    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:32.105001    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:32.143135    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:32.143143    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:32.180840    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:32.180850    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:32.218831    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:32.218842    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:32.231155    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:32.231165    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:32.248715    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:32.248727    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:32.253139    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:32.253149    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:32.264555    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:32.264567    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:32.279239    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:32.279251    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:32.292502    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:32.292511    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:32.315416    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:32.315423    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
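Each gathering cycle above repeats one fixed recipe: list containers for each control-plane component by the k8s_<name> prefix, then tail the last 400 lines of each container's log. A minimal bash sketch of that recipe (component names, the 400-line tail, and the crictl-or-docker fallback are taken from the log lines; everything else is standard docker CLI):

    #!/bin/bash
    # Reproduce minikube's log-gathering pass: find each component's
    # containers (running or exited), then tail the last 400 log lines.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && { echo "No container was found matching \"${c}\""; continue; }
      for id in $ids; do
        echo "=== ${c} [${id}] ==="
        docker logs --tail 400 "$id"
      done
    done
    # Container runtime status, with the same crictl-or-docker fallback:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a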
	I1028 04:47:34.831201    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:39.833578    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:39.833901    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:39.863748    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:39.863883    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:39.881497    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:39.881596    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:39.895811    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:39.895897    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:39.910110    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:39.910190    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:39.920891    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:39.920955    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:39.932378    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:39.932460    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:39.942723    5010 logs.go:282] 0 containers: []
	W1028 04:47:39.942739    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:39.942801    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:39.953198    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:39.953214    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:39.953219    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:39.967326    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:39.967339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:39.983274    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:39.983286    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:40.000653    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:40.000663    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:40.012468    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:40.012481    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:40.047528    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:40.047540    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:40.066924    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:40.066935    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:40.078531    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:40.078541    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:40.092520    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:40.092530    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:40.115221    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:40.115236    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:40.119420    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:40.119427    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:40.133528    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:40.133537    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:40.145382    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:40.145391    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:40.182384    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:40.182398    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:40.194480    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:40.194494    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:40.208629    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:40.208644    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:40.220505    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:40.220520    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:42.760103    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:47.762668    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:47.763047    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:47.797419    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:47.797653    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:47.820100    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:47.820229    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:47.836382    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:47.836481    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:47.852070    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:47.852153    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:47.863937    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:47.864028    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:47.877101    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:47.877177    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:47.887552    5010 logs.go:282] 0 containers: []
	W1028 04:47:47.887562    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:47.887621    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:47.898208    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:47.898226    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:47.898231    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:47.922701    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:47.922713    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:47.945592    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:47.945604    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:47.969842    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:47.969851    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:47.981205    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:47.981218    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:47.995482    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:47.995494    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:48.033045    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:48.033054    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:48.044789    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:48.044802    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:48.056687    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:48.056697    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:48.068672    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:48.068682    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:48.079416    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:48.079428    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:48.101661    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:48.101674    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:48.138407    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:48.138416    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:48.174153    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:48.174166    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:48.189124    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:48.189133    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:48.193869    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:48.193877    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:48.209024    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:48.209035    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:50.726334    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:47:55.728846    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:47:55.729438    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:47:55.770223    5010 logs.go:282] 2 containers: [35577bdedd1d 4c70160b1032]
	I1028 04:47:55.770377    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:47:55.791423    5010 logs.go:282] 2 containers: [8d85be6f6ccb 845363640e9e]
	I1028 04:47:55.791534    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:47:55.809583    5010 logs.go:282] 1 containers: [cd29951ba80f]
	I1028 04:47:55.809673    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:47:55.828549    5010 logs.go:282] 2 containers: [cfe1947320d9 4deb81f71238]
	I1028 04:47:55.828629    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:47:55.839015    5010 logs.go:282] 1 containers: [e704ef938396]
	I1028 04:47:55.839101    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:47:55.849954    5010 logs.go:282] 2 containers: [2e2f8075c0c6 5446ff2ad4cf]
	I1028 04:47:55.850031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:47:55.861385    5010 logs.go:282] 0 containers: []
	W1028 04:47:55.861397    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:47:55.861461    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:47:55.876952    5010 logs.go:282] 2 containers: [8e0e60687028 84cd26b10118]
	I1028 04:47:55.876971    5010 logs.go:123] Gathering logs for kube-controller-manager [2e2f8075c0c6] ...
	I1028 04:47:55.876977    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e2f8075c0c6"
	I1028 04:47:55.894717    5010 logs.go:123] Gathering logs for kube-controller-manager [5446ff2ad4cf] ...
	I1028 04:47:55.894726    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5446ff2ad4cf"
	I1028 04:47:55.907241    5010 logs.go:123] Gathering logs for storage-provisioner [84cd26b10118] ...
	I1028 04:47:55.907252    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd26b10118"
	I1028 04:47:55.935044    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:47:55.935056    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:47:55.939930    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:47:55.939941    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:47:55.978156    5010 logs.go:123] Gathering logs for etcd [8d85be6f6ccb] ...
	I1028 04:47:55.978166    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d85be6f6ccb"
	I1028 04:47:55.992074    5010 logs.go:123] Gathering logs for coredns [cd29951ba80f] ...
	I1028 04:47:55.992087    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd29951ba80f"
	I1028 04:47:56.003446    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:47:56.003457    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:47:56.040718    5010 logs.go:123] Gathering logs for kube-apiserver [35577bdedd1d] ...
	I1028 04:47:56.040729    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35577bdedd1d"
	I1028 04:47:56.055113    5010 logs.go:123] Gathering logs for storage-provisioner [8e0e60687028] ...
	I1028 04:47:56.055123    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e0e60687028"
	I1028 04:47:56.066678    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:47:56.066689    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:47:56.090090    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:47:56.090099    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:47:56.101636    5010 logs.go:123] Gathering logs for kube-apiserver [4c70160b1032] ...
	I1028 04:47:56.101645    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c70160b1032"
	I1028 04:47:56.144763    5010 logs.go:123] Gathering logs for etcd [845363640e9e] ...
	I1028 04:47:56.144772    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845363640e9e"
	I1028 04:47:56.160986    5010 logs.go:123] Gathering logs for kube-scheduler [cfe1947320d9] ...
	I1028 04:47:56.160996    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfe1947320d9"
	I1028 04:47:56.172962    5010 logs.go:123] Gathering logs for kube-scheduler [4deb81f71238] ...
	I1028 04:47:56.172971    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4deb81f71238"
	I1028 04:47:56.188877    5010 logs.go:123] Gathering logs for kube-proxy [e704ef938396] ...
	I1028 04:47:56.188887    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e704ef938396"
	I1028 04:47:58.703865    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:03.706285    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:03.706411    5010 kubeadm.go:597] duration metric: took 4m3.961812083s to restartPrimaryControlPlane
	W1028 04:48:03.706497    5010 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 04:48:03.706535    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 04:48:04.785041    5010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.078488625s)
	I1028 04:48:04.785120    5010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 04:48:04.790752    5010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 04:48:04.793804    5010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 04:48:04.797027    5010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 04:48:04.797033    5010 kubeadm.go:157] found existing configuration files:
	
	I1028 04:48:04.797077    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf
	I1028 04:48:04.799825    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 04:48:04.799859    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 04:48:04.802397    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf
	I1028 04:48:04.805190    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 04:48:04.805241    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 04:48:04.808405    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf
	I1028 04:48:04.811181    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 04:48:04.811211    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 04:48:04.813744    5010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf
	I1028 04:48:04.816778    5010 kubeadm.go:163] "https://control-plane.minikube.internal:57273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 04:48:04.816809    5010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
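The four grep-then-rm exchanges above are one stale-config check: if a kubeconfig under /etc/kubernetes no longer references the expected control-plane endpoint (here, because the files do not exist at all after the reset), it is removed so the following kubeadm init regenerates it. Condensed into a sketch, with the endpoint and file list copied from the log:

    #!/bin/bash
    # Drop kubeconfigs that don't reference the expected API endpoint,
    # so the subsequent `kubeadm init` rewrites them from scratch.
    endpoint="https://control-plane.minikube.internal:57273"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        # grep exits non-zero both when the file is missing (as here) and
        # when it exists but points elsewhere; either way the file goes.
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done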
	I1028 04:48:04.819783    5010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 04:48:04.841724    5010 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 04:48:04.841800    5010 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 04:48:04.894801    5010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 04:48:04.894860    5010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 04:48:04.894909    5010 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 04:48:04.943155    5010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 04:48:04.949280    5010 out.go:235]   - Generating certificates and keys ...
	I1028 04:48:04.949314    5010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 04:48:04.949353    5010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 04:48:04.949397    5010 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 04:48:04.949430    5010 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 04:48:04.949467    5010 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 04:48:04.949505    5010 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 04:48:04.949546    5010 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 04:48:04.949579    5010 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 04:48:04.949619    5010 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 04:48:04.949659    5010 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 04:48:04.949677    5010 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 04:48:04.949717    5010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 04:48:05.099764    5010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 04:48:05.224647    5010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 04:48:05.313779    5010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 04:48:05.379561    5010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 04:48:05.409866    5010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 04:48:05.410290    5010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 04:48:05.410315    5010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 04:48:05.494741    5010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 04:48:05.501890    5010 out.go:235]   - Booting up control plane ...
	I1028 04:48:05.502055    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 04:48:05.502164    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 04:48:05.502208    5010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 04:48:05.502254    5010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 04:48:05.502402    5010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 04:48:09.505229    5010 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003083 seconds
	I1028 04:48:09.505295    5010 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 04:48:09.508863    5010 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 04:48:10.017818    5010 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 04:48:10.017925    5010 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-714000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 04:48:10.521920    5010 kubeadm.go:310] [bootstrap-token] Using token: qsurol.sopdxvnxt7m0vkqj
	I1028 04:48:10.527469    5010 out.go:235]   - Configuring RBAC rules ...
	I1028 04:48:10.527518    5010 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 04:48:10.527560    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 04:48:10.529317    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 04:48:10.534090    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 04:48:10.535178    5010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 04:48:10.535957    5010 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 04:48:10.540332    5010 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 04:48:10.717989    5010 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 04:48:10.926405    5010 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 04:48:10.926828    5010 kubeadm.go:310] 
	I1028 04:48:10.926865    5010 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 04:48:10.926869    5010 kubeadm.go:310] 
	I1028 04:48:10.926913    5010 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 04:48:10.926920    5010 kubeadm.go:310] 
	I1028 04:48:10.926937    5010 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 04:48:10.926974    5010 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 04:48:10.927003    5010 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 04:48:10.927006    5010 kubeadm.go:310] 
	I1028 04:48:10.927032    5010 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 04:48:10.927034    5010 kubeadm.go:310] 
	I1028 04:48:10.927070    5010 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 04:48:10.927074    5010 kubeadm.go:310] 
	I1028 04:48:10.927103    5010 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 04:48:10.927147    5010 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 04:48:10.927188    5010 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 04:48:10.927192    5010 kubeadm.go:310] 
	I1028 04:48:10.927237    5010 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 04:48:10.927295    5010 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 04:48:10.927298    5010 kubeadm.go:310] 
	I1028 04:48:10.927346    5010 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qsurol.sopdxvnxt7m0vkqj \
	I1028 04:48:10.927413    5010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf \
	I1028 04:48:10.927423    5010 kubeadm.go:310] 	--control-plane 
	I1028 04:48:10.927426    5010 kubeadm.go:310] 
	I1028 04:48:10.927473    5010 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 04:48:10.927478    5010 kubeadm.go:310] 
	I1028 04:48:10.927528    5010 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qsurol.sopdxvnxt7m0vkqj \
	I1028 04:48:10.927587    5010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b1828748577e93ccb806e0aae973ddbc82f94e1a1a028b415724a35e8cf5acf 
	I1028 04:48:10.927700    5010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
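The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key, and can be recomputed from the CA certificate with stock openssl (the standard kubeadm recipe). The path below is an assumption taken from the "[certs] Using certificateDir folder /var/lib/minikube/certs" line earlier in this run:

    #!/bin/bash
    # Recompute the discovery-token-ca-cert-hash: SHA-256 over the
    # DER-encoded public key of the cluster CA certificate.
    CA=/var/lib/minikube/certs/ca.crt   # assumed path, per the [certs] line above
    openssl x509 -pubkey -in "$CA" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'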
	I1028 04:48:10.927757    5010 cni.go:84] Creating CNI manager for ""
	I1028 04:48:10.927767    5010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:48:10.933723    5010 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 04:48:10.937728    5010 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 04:48:10.940708    5010 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
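The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are not shown in the log. Illustratively, a bridge CNI conflist of the kind this step writes looks like the sketch below; every field value here is an assumption for illustration, not the file's actual contents:

    #!/bin/bash
    # Illustrative bridge conflist; the real payload is not in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF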
	I1028 04:48:10.946570    5010 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 04:48:10.946648    5010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 04:48:10.946659    5010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-714000 minikube.k8s.io/updated_at=2024_10_28T04_48_10_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=stopped-upgrade-714000 minikube.k8s.io/primary=true
	I1028 04:48:10.985283    5010 kubeadm.go:1113] duration metric: took 38.693167ms to wait for elevateKubeSystemPrivileges
	I1028 04:48:10.985292    5010 ops.go:34] apiserver oom_adj: -16
	I1028 04:48:10.985300    5010 kubeadm.go:394] duration metric: took 4m11.254047625s to StartCluster
	I1028 04:48:10.985310    5010 settings.go:142] acquiring lock: {Name:mkb494d4e656a3be4717ac10e07a477c00ee7ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:48:10.985408    5010 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:48:10.985857    5010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/kubeconfig: {Name:mk86106150253bdc69b9602a0557ef2198523a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:48:10.986067    5010 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:48:10.986078    5010 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 04:48:10.986112    5010 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-714000"
	I1028 04:48:10.986120    5010 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-714000"
	W1028 04:48:10.986122    5010 addons.go:243] addon storage-provisioner should already be in state true
	I1028 04:48:10.986135    5010 host.go:66] Checking if "stopped-upgrade-714000" exists ...
	I1028 04:48:10.986139    5010 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-714000"
	I1028 04:48:10.986149    5010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-714000"
	I1028 04:48:10.986305    5010 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:48:10.989549    5010 out.go:177] * Verifying Kubernetes components...
	I1028 04:48:10.990243    5010 kapi.go:59] client config for stopped-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/stopped-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19876-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102d96680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 04:48:10.993918    5010 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-714000"
	W1028 04:48:10.993923    5010 addons.go:243] addon default-storageclass should already be in state true
	I1028 04:48:10.993930    5010 host.go:66] Checking if "stopped-upgrade-714000" exists ...
	I1028 04:48:10.994439    5010 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 04:48:10.994444    5010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 04:48:10.994450    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:48:10.999667    5010 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 04:48:11.002719    5010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 04:48:11.008740    5010 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:48:11.008749    5010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 04:48:11.008758    5010 sshutil.go:53] new ssh client: &{IP:localhost Port:57238 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/stopped-upgrade-714000/id_rsa Username:docker}
	I1028 04:48:11.089879    5010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 04:48:11.095416    5010 api_server.go:52] waiting for apiserver process to appear ...
	I1028 04:48:11.095486    5010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 04:48:11.099338    5010 api_server.go:72] duration metric: took 113.260459ms to wait for apiserver process to appear ...
	I1028 04:48:11.099346    5010 api_server.go:88] waiting for apiserver healthz status ...
	I1028 04:48:11.099353    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:11.110533    5010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 04:48:11.171655    5010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 04:48:11.486864    5010 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 04:48:11.486876    5010 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 04:48:16.101466    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:16.101504    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:21.101795    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:21.101819    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:26.102159    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:26.102190    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:31.102593    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:31.102622    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:36.103321    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:36.103350    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:41.104120    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:41.104155    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 04:48:41.489262    5010 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
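	The failing callback can be reproduced directly on the node with the same kubectl binary and kubeconfig the rest of this run uses; with the apiserver not answering on 10.0.2.15:8443, both forms below time out the same way:

    #!/bin/bash
    # Reproduce the StorageClass listing the addon callback failed on.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get storageclasses --request-timeout=5s
    # Or hit the REST path named in the error message:
    curl -sk --max-time 5 https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses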
	I1028 04:48:41.493421    5010 out.go:177] * Enabled addons: storage-provisioner
	I1028 04:48:41.502296    5010 addons.go:510] duration metric: took 30.516102959s for enable addons: enabled=[storage-provisioner]
	I1028 04:48:46.105240    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:46.105330    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:51.107020    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:51.107067    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:48:56.107599    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:48:56.107644    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:01.109412    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:01.109486    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:06.111691    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:06.111738    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:11.114121    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
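	Each "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pair above is a single probe with a roughly five-second client timeout. In effect, the wait loop is the following (a minimal sketch; curl stands in for minikube's internal HTTP client):

    #!/bin/bash
    # Poll the apiserver health endpoint the way the loop above does:
    # one probe per cycle, ~5s client timeout, until it answers "ok".
    until out=$(curl -sk --max-time 5 https://10.0.2.15:8443/healthz) \
          && [ "$out" = "ok" ]; do
      echo "stopped: /healthz not ready, retrying..."
      sleep 2
    done
    echo "apiserver healthy"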
	I1028 04:49:11.114246    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:11.124517    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:11.124595    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:11.135053    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:11.135132    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:11.148506    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:11.148584    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:11.159395    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:11.159468    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:11.169474    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:11.169559    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:11.179673    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:11.179748    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:11.190418    5010 logs.go:282] 0 containers: []
	W1028 04:49:11.190428    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:11.190493    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:11.201036    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:11.201054    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:11.201059    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:11.218549    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:11.218557    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:11.243308    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:11.243317    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:11.254910    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:11.254920    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:11.289481    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:11.289492    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:11.293715    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:11.293722    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:11.310233    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:11.310246    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:11.327545    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:11.327556    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:11.339136    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:11.339147    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:11.349967    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:11.349977    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:11.387353    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:11.387366    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:11.401945    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:11.401956    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:11.413220    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:11.413231    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:13.930064    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:18.932962    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:18.933092    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:18.946611    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:18.946698    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:18.957277    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:18.957370    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:18.968170    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:18.968239    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:18.978895    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:18.978969    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:18.989290    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:18.989364    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:18.999613    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:18.999682    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:19.009773    5010 logs.go:282] 0 containers: []
	W1028 04:49:19.009785    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:19.009849    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:19.020337    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:19.020350    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:19.020355    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:19.035403    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:19.035415    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:19.052687    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:19.052697    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:19.078188    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:19.078197    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:19.112702    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:19.112713    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:19.126717    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:19.126730    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:19.138597    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:19.138607    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:19.151288    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:19.151306    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:19.164926    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:19.164939    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:19.178098    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:19.178111    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:19.183186    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:19.183198    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:19.227314    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:19.227332    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:19.243670    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:19.243689    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:21.763426    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:26.765781    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:26.766372    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:26.804873    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:26.805031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:26.828948    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:26.829063    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:26.844514    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:26.844599    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:26.856859    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:26.856942    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:26.867686    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:26.867764    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:26.883146    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:26.883226    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:26.893598    5010 logs.go:282] 0 containers: []
	W1028 04:49:26.893610    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:26.893675    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:26.904400    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:26.904418    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:26.904423    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:26.923285    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:26.923295    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:26.941105    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:26.941114    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:26.952534    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:26.952546    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:26.976723    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:26.976733    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:26.980758    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:26.980764    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:26.994810    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:26.994823    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:27.006444    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:27.006457    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:27.018582    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:27.018594    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:27.040352    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:27.040362    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:27.052155    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:27.052168    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:27.085718    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:27.085732    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:27.120069    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:27.120083    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:29.637644    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:34.640055    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:34.640602    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:34.681323    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:34.681464    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:34.703027    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:34.703131    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:34.718217    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:34.718300    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:34.730909    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:34.730976    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:34.742013    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:34.742079    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:34.754766    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:34.754840    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:34.765110    5010 logs.go:282] 0 containers: []
	W1028 04:49:34.765122    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:34.765182    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:34.775507    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:34.775521    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:34.775529    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:34.780263    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:34.780271    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:34.795703    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:34.795715    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:34.811252    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:34.811265    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:34.822859    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:34.822870    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:34.834143    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:34.834152    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:34.855401    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:34.855410    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:34.873651    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:34.873658    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:34.908224    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:34.908232    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:34.944873    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:34.944883    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:34.959668    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:34.959678    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:34.971929    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:34.971942    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:34.996800    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:34.996808    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:37.510037    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:42.512434    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:42.512537    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:42.524658    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:42.524738    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:42.534602    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:42.534687    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:42.546603    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:42.546673    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:42.557280    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:42.557353    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:42.569317    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:42.569389    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:42.579661    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:42.579727    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:42.589293    5010 logs.go:282] 0 containers: []
	W1028 04:49:42.589303    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:42.589359    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:42.600016    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:42.600031    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:42.600037    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:42.635585    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:42.635598    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:42.650318    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:42.650330    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:42.661994    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:42.662003    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:42.673953    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:42.673967    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:42.706560    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:42.706567    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:42.710439    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:42.710446    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:42.723656    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:42.723669    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:42.740289    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:42.740300    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:42.755456    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:42.755467    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:42.767327    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:42.767338    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:42.785444    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:42.785455    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:42.797461    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:42.797475    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:45.322966    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:50.325778    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:50.326376    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:50.364018    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:50.364164    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:50.384911    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:50.385035    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:50.399859    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:50.399943    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:50.411682    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:50.411767    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:50.422928    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:50.422999    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:50.433865    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:50.433934    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:50.444379    5010 logs.go:282] 0 containers: []
	W1028 04:49:50.444388    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:50.444448    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:50.459294    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:50.459309    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:50.459315    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:50.471508    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:50.471521    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:49:50.489604    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:50.489617    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:50.501788    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:50.501800    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:50.536418    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:50.536432    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:50.548994    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:50.549008    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:50.567735    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:50.567747    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:50.585733    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:50.585743    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:50.597412    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:50.597424    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:50.621929    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:50.621939    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:50.633313    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:50.633327    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:50.668568    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:50.668576    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:50.673215    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:50.673223    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:53.190002    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:49:58.192318    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:49:58.192417    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:49:58.207417    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:49:58.207499    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:49:58.220970    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:49:58.221036    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:49:58.232033    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:49:58.232110    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:49:58.242339    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:49:58.242420    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:49:58.258500    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:49:58.258591    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:49:58.269334    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:49:58.269404    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:49:58.279787    5010 logs.go:282] 0 containers: []
	W1028 04:49:58.279799    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:49:58.279854    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:49:58.290050    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:49:58.290064    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:49:58.290069    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:49:58.301430    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:49:58.301444    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:49:58.326352    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:49:58.326363    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:49:58.330354    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:49:58.330362    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:49:58.364289    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:49:58.364302    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:49:58.378588    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:49:58.378600    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:49:58.397653    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:49:58.397665    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:49:58.412752    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:49:58.412764    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:49:58.424671    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:49:58.424682    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:49:58.437041    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:49:58.437050    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:49:58.471198    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:49:58.471205    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:49:58.486545    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:49:58.486557    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:49:58.498307    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:49:58.498321    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:01.020093    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:06.022529    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:06.022826    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:06.049747    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:06.049882    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:06.067540    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:06.067635    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:06.080781    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:50:06.080862    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:06.092301    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:06.092375    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:06.106491    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:06.106568    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:06.116929    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:06.117000    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:06.127293    5010 logs.go:282] 0 containers: []
	W1028 04:50:06.127306    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:06.127364    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:06.137593    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:06.137608    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:06.137614    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:06.142008    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:06.142017    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:06.156416    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:06.156427    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:06.175592    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:06.175604    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:06.187179    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:06.187190    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:06.199013    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:06.199026    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:06.231955    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:06.231963    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:06.268936    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:06.268949    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:06.283198    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:06.283211    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:06.298754    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:06.298766    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:06.314820    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:06.314831    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:06.335965    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:06.335977    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:06.361318    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:06.361327    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:08.879812    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:13.882415    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:13.882484    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:13.893424    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:13.893491    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:13.904301    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:13.904375    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:13.915575    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:50:13.915641    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:13.927130    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:13.927202    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:13.937467    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:13.937543    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:13.948859    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:13.948927    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:13.959187    5010 logs.go:282] 0 containers: []
	W1028 04:50:13.959198    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:13.959264    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:13.970319    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:13.970339    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:13.970345    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:13.981638    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:13.981648    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:13.996942    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:13.996950    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:14.014535    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:14.014545    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:14.026303    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:14.026313    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:14.059104    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:14.059112    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:14.063260    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:14.063265    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:14.097945    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:14.097956    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:14.113282    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:14.113292    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:14.136736    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:14.136744    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:14.148163    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:14.148179    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:14.161919    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:14.161930    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:14.173539    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:14.173549    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:16.687707    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:21.690367    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:21.690897    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:21.731456    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:21.731612    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:21.754835    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:21.754969    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:21.770649    5010 logs.go:282] 2 containers: [4d29424af73f 73f15bc18fb3]
	I1028 04:50:21.770739    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:21.783552    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:21.783630    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:21.794441    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:21.794519    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:21.804937    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:21.805017    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:21.816272    5010 logs.go:282] 0 containers: []
	W1028 04:50:21.816283    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:21.816339    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:21.826745    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:21.826759    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:21.826765    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:21.859377    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:21.859386    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:21.877052    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:21.877062    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:21.891319    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:21.891331    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:21.905515    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:21.905550    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:21.916791    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:21.916803    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:21.929753    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:21.929767    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:21.945616    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:21.945628    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:21.958629    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:21.958638    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:21.963017    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:21.963027    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:21.997605    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:21.997619    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:22.012058    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:22.012070    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:22.037094    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:22.037102    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:24.550390    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:29.552858    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:29.553108    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:29.573144    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:29.573250    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:29.586962    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:29.587038    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:29.603666    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:50:29.603747    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:29.619791    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:29.619867    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:29.630297    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:29.630362    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:29.640527    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:29.640589    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:29.650133    5010 logs.go:282] 0 containers: []
	W1028 04:50:29.650145    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:29.650208    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:29.660360    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:29.660377    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:29.660384    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:29.665016    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:50:29.665025    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:50:29.676091    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:50:29.676102    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:50:29.688457    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:29.688468    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:29.703775    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:29.703787    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:29.715038    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:29.715049    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:29.726296    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:29.726310    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:29.738171    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:29.738181    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:29.750608    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:29.750620    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:29.785946    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:29.785958    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:29.801182    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:29.801196    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:29.816095    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:29.816105    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:29.835467    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:29.835478    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:29.868241    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:29.868248    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:29.885655    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:29.885665    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:32.412552    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:37.415456    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:37.415992    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:37.460195    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:37.460353    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:37.480676    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:37.480784    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:37.496056    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:50:37.496142    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:37.509915    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:37.509999    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:37.520994    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:37.521067    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:37.531140    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:37.531207    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:37.541107    5010 logs.go:282] 0 containers: []
	W1028 04:50:37.541121    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:37.541181    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:37.558158    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:37.558175    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:37.558180    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:37.562500    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:37.562506    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:37.574164    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:37.574177    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:37.610587    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:37.610598    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:37.622918    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:37.622928    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:37.642658    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:37.642667    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:37.659857    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:37.659870    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:37.671958    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:37.671969    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:37.697499    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:37.697509    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:37.712046    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:50:37.712057    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:50:37.723207    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:50:37.723218    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:50:37.734690    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:37.734703    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:37.749647    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:37.749659    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:37.760961    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:37.760972    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:37.794331    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:37.794339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:40.310998    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:45.313745    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:45.313822    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:45.328262    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:45.328338    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:45.339544    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:45.339616    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:45.352240    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:50:45.352317    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:45.364349    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:45.364405    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:45.379930    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:45.379996    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:45.392064    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:45.392141    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:45.403877    5010 logs.go:282] 0 containers: []
	W1028 04:50:45.403888    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:45.403940    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:45.423360    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:45.423376    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:50:45.423382    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:50:45.436270    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:45.436283    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:45.449501    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:45.449512    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:45.468069    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:45.468081    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:45.504942    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:45.504958    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:45.521959    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:45.521969    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:45.534516    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:50:45.534524    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:50:45.546096    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:45.546105    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:45.563617    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:45.563627    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:45.576566    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:45.576579    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:45.589922    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:45.589934    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:45.615430    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:45.615450    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:45.620833    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:45.620845    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:45.658157    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:45.658169    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:45.673466    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:45.673479    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:48.188377    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:50:53.190790    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:50:53.191363    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:50:53.232948    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:50:53.233098    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:50:53.254991    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:50:53.255115    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:50:53.270111    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:50:53.270198    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:50:53.282734    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:50:53.282810    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:50:53.293540    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:50:53.293612    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:50:53.304458    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:50:53.304528    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:50:53.314797    5010 logs.go:282] 0 containers: []
	W1028 04:50:53.314808    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:50:53.314879    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:50:53.326568    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:50:53.326589    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:50:53.326594    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:50:53.338604    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:50:53.338617    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:50:53.350350    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:50:53.350362    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:50:53.361950    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:50:53.361962    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:50:53.385778    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:50:53.385788    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:50:53.389736    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:50:53.389745    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:50:53.403802    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:50:53.403812    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:50:53.419556    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:50:53.419568    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:50:53.431263    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:50:53.431273    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:50:53.448412    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:50:53.448423    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:50:53.459972    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:50:53.459982    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:50:53.471956    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:50:53.471970    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:50:53.506109    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:50:53.506116    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:50:53.541068    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:50:53.541081    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:50:53.558169    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:50:53.558181    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:50:56.072477    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:01.075325    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:01.075758    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:01.108581    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:01.108718    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:01.126248    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:01.126338    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:01.139281    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:01.139356    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:01.151201    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:01.151277    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:01.161303    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:01.161370    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:01.171848    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:01.171909    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:01.181961    5010 logs.go:282] 0 containers: []
	W1028 04:51:01.181970    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:01.182024    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:01.197984    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:01.198002    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:01.198008    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:01.232465    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:01.232478    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:01.247496    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:01.247506    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:01.259270    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:01.259283    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:01.272589    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:01.272602    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:01.284421    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:01.284431    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:01.309958    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:01.309969    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:01.345775    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:01.345792    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:01.360490    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:01.360506    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:01.376410    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:01.376420    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:01.388377    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:01.388386    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:01.393332    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:01.393339    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:01.405208    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:01.405218    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:01.422035    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:01.422045    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:01.439251    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:01.439261    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:03.953494    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:08.956382    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:08.956989    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:08.998225    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:08.998381    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:09.020660    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:09.020780    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:09.036942    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:09.037031    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:09.049245    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:09.049320    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:09.060634    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:09.060710    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:09.071403    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:09.071479    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:09.082142    5010 logs.go:282] 0 containers: []
	W1028 04:51:09.082154    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:09.082216    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:09.092614    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:09.092643    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:09.092648    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:09.104422    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:09.104435    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:09.120525    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:09.120537    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:09.132408    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:09.132419    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:09.148363    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:09.148374    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:09.160820    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:09.160829    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:09.165242    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:09.165250    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:09.199240    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:09.199251    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:09.214101    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:09.214110    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:09.228573    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:09.228585    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:09.240246    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:09.240257    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:09.275277    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:09.275284    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:09.287674    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:09.287684    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:09.311787    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:09.311794    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:09.324297    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:09.324308    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:11.842940    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:16.845206    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:16.845688    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:16.878667    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:16.878806    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:16.897282    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:16.897382    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:16.911782    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:16.911868    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:16.924299    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:16.924374    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:16.935689    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:16.935756    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:16.951764    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:16.951838    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:16.961832    5010 logs.go:282] 0 containers: []
	W1028 04:51:16.961842    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:16.961898    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:16.974664    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:16.974681    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:16.974686    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:16.986701    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:16.986715    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:17.006055    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:17.006068    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:17.020885    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:17.020898    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:17.032555    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:17.032566    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:17.044263    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:17.044273    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:17.056313    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:17.056326    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:17.068178    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:17.068190    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:17.092155    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:17.092162    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:17.126418    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:17.126429    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:17.149881    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:17.149892    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:17.163845    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:17.163856    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:17.179863    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:17.179875    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:17.191698    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:17.191707    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:17.225186    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:17.225193    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:19.731224    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:24.733378    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:24.733475    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:24.747978    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:24.748056    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:24.760343    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:24.760414    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:24.771360    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:24.771435    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:24.786994    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:24.787069    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:24.796973    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:24.797041    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:24.810327    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:24.810391    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:24.820772    5010 logs.go:282] 0 containers: []
	W1028 04:51:24.820784    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:24.820839    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:24.830958    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:24.830975    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:24.830981    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:24.856516    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:24.856524    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:24.860764    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:24.860770    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:24.884133    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:24.884144    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:24.898015    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:24.898026    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:24.909871    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:24.909884    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:24.921167    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:24.921179    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:24.933091    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:24.933100    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:24.968095    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:24.968113    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:25.002951    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:25.002963    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:25.014902    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:25.014913    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:25.032130    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:25.032141    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:25.044569    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:25.044579    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:25.059981    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:25.059992    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:25.078850    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:25.078858    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:27.592646    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:32.595027    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:32.595239    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:32.619912    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:32.620040    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:32.636237    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:32.636318    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:32.649289    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:32.649371    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:32.660318    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:32.660388    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:32.670691    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:32.670770    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:32.681515    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:32.681586    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:32.692110    5010 logs.go:282] 0 containers: []
	W1028 04:51:32.692121    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:32.692182    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:32.702225    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:32.702243    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:32.702249    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:32.713492    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:32.713500    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:32.737837    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:32.737844    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:32.773377    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:32.773386    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:32.785403    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:32.785415    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:32.797376    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:32.797387    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:32.814040    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:32.814052    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:32.826097    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:32.826108    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:32.841401    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:32.841411    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:32.852831    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:32.852843    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:32.866996    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:32.867007    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:32.871658    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:32.871666    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:32.885964    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:32.885975    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:32.903311    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:32.903320    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:32.922498    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:32.922507    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:35.457595    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:40.459953    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:40.460418    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:40.501291    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:40.501451    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:40.523409    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:40.523541    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:40.540508    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:40.540606    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:40.553093    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:40.553176    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:40.563705    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:40.563778    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:40.574749    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:40.574823    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:40.585752    5010 logs.go:282] 0 containers: []
	W1028 04:51:40.585765    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:40.585830    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:40.601705    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:40.601725    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:40.601730    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:40.606090    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:40.606098    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:40.648137    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:40.648150    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:40.660160    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:40.660173    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:40.672000    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:40.672014    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:40.687681    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:40.687692    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:40.699681    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:40.699694    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:40.733058    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:40.733065    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:40.745562    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:40.745572    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:40.767649    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:40.767661    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:40.779735    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:40.779746    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:40.794344    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:40.794357    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:40.810032    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:40.810045    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:40.822422    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:40.822435    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:40.834410    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:40.834421    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:43.359669    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:48.362397    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:48.362489    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:48.374230    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:48.374307    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:48.386119    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:48.386186    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:48.397623    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:48.397689    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:48.408234    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:48.408313    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:48.424112    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:48.424190    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:48.436115    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:48.436177    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:48.446825    5010 logs.go:282] 0 containers: []
	W1028 04:51:48.446837    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:48.446892    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:48.457901    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:48.457917    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:48.457922    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:48.495083    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:48.495092    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:48.507112    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:48.507125    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:48.525030    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:48.525042    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:48.530382    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:48.530394    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:48.546325    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:48.546334    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:48.561834    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:48.561853    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:48.579023    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:48.579035    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:48.591520    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:48.591529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:48.603995    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:48.604006    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:48.643396    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:48.643408    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:48.656917    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:48.656928    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:48.676213    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:48.676223    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:48.702456    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:48.702465    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:48.721017    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:48.721029    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:51.256665    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:51:56.259268    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:51:56.259421    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:51:56.278270    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:51:56.278371    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:51:56.297149    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:51:56.297228    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:51:56.309637    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:51:56.309721    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:51:56.326486    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:51:56.326566    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:51:56.339110    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:51:56.339195    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:51:56.351848    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:51:56.351927    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:51:56.363829    5010 logs.go:282] 0 containers: []
	W1028 04:51:56.363843    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:51:56.363912    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:51:56.377407    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:51:56.377429    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:51:56.377435    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:51:56.391995    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:51:56.392009    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:51:56.411905    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:51:56.411922    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:51:56.417172    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:51:56.417190    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:51:56.454664    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:51:56.454673    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:51:56.470691    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:51:56.470699    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:51:56.488455    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:51:56.488463    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:51:56.523046    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:51:56.523063    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:51:56.538679    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:51:56.538689    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:51:56.550770    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:51:56.550785    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:51:56.562010    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:51:56.562021    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:51:56.573681    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:51:56.573692    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:51:56.585503    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:51:56.585513    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:51:56.598097    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:51:56.598107    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:51:56.613793    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:51:56.613802    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:51:59.140298    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:52:04.142663    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:52:04.143192    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 04:52:04.175795    5010 logs.go:282] 1 containers: [fdf99c88acc1]
	I1028 04:52:04.175940    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 04:52:04.203364    5010 logs.go:282] 1 containers: [17f5fed9ac03]
	I1028 04:52:04.203463    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 04:52:04.216992    5010 logs.go:282] 4 containers: [196a1720a54f c6aaabf5a75d 4d29424af73f 73f15bc18fb3]
	I1028 04:52:04.217074    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 04:52:04.231094    5010 logs.go:282] 1 containers: [7cdd6c419b06]
	I1028 04:52:04.231164    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 04:52:04.243230    5010 logs.go:282] 1 containers: [5d452bd3f674]
	I1028 04:52:04.243301    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 04:52:04.253862    5010 logs.go:282] 1 containers: [6adf5360bae9]
	I1028 04:52:04.253932    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 04:52:04.264067    5010 logs.go:282] 0 containers: []
	W1028 04:52:04.264080    5010 logs.go:284] No container was found matching "kindnet"
	I1028 04:52:04.264141    5010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 04:52:04.274632    5010 logs.go:282] 1 containers: [0d06062365b7]
	I1028 04:52:04.274650    5010 logs.go:123] Gathering logs for kube-scheduler [7cdd6c419b06] ...
	I1028 04:52:04.274656    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd6c419b06"
	I1028 04:52:04.290035    5010 logs.go:123] Gathering logs for kube-proxy [5d452bd3f674] ...
	I1028 04:52:04.290047    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d452bd3f674"
	I1028 04:52:04.302127    5010 logs.go:123] Gathering logs for storage-provisioner [0d06062365b7] ...
	I1028 04:52:04.302138    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d06062365b7"
	I1028 04:52:04.320160    5010 logs.go:123] Gathering logs for Docker ...
	I1028 04:52:04.320172    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 04:52:04.343072    5010 logs.go:123] Gathering logs for container status ...
	I1028 04:52:04.343080    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 04:52:04.354752    5010 logs.go:123] Gathering logs for kube-apiserver [fdf99c88acc1] ...
	I1028 04:52:04.354763    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf99c88acc1"
	I1028 04:52:04.369518    5010 logs.go:123] Gathering logs for etcd [17f5fed9ac03] ...
	I1028 04:52:04.369529    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f5fed9ac03"
	I1028 04:52:04.383355    5010 logs.go:123] Gathering logs for describe nodes ...
	I1028 04:52:04.383368    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 04:52:04.417556    5010 logs.go:123] Gathering logs for coredns [c6aaabf5a75d] ...
	I1028 04:52:04.417569    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6aaabf5a75d"
	I1028 04:52:04.429322    5010 logs.go:123] Gathering logs for kubelet ...
	I1028 04:52:04.429335    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 04:52:04.464145    5010 logs.go:123] Gathering logs for dmesg ...
	I1028 04:52:04.464154    5010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 04:52:04.468490    5010 logs.go:123] Gathering logs for coredns [196a1720a54f] ...
	I1028 04:52:04.468498    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196a1720a54f"
	I1028 04:52:04.480469    5010 logs.go:123] Gathering logs for coredns [4d29424af73f] ...
	I1028 04:52:04.480479    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d29424af73f"
	I1028 04:52:04.492030    5010 logs.go:123] Gathering logs for coredns [73f15bc18fb3] ...
	I1028 04:52:04.492043    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73f15bc18fb3"
	I1028 04:52:04.503917    5010 logs.go:123] Gathering logs for kube-controller-manager [6adf5360bae9] ...
	I1028 04:52:04.503930    5010 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6adf5360bae9"
	I1028 04:52:07.022534    5010 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 04:52:12.025288    5010 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 04:52:12.029479    5010 out.go:201] 
	W1028 04:52:12.033515    5010 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 04:52:12.033522    5010 out.go:270] * 
	W1028 04:52:12.033933    5010 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:12.049459    5010 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.69s)
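The run above fails in a probe loop: each cycle, minikube enumerates the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, gathers their logs, and re-checks https://10.0.2.15:8443/healthz until the 6m0s node-wait budget is exhausted. The following is a minimal Go sketch of that probe pattern, not minikube's actual implementation: the URL, the ~5 s per-probe timeout, and the 6-minute budget are taken from the log; every name in the sketch is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls url until it returns 200 OK or the overall
// budget expires, mirroring the healthz loop visible in the log above.
func waitForHealthy(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, as in the log
		// The bootstrapping apiserver serves a self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for stop := time.Now().Add(budget); time.Now().Before(stop); {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // pause between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Once the budget lapses, the test surfaces exactly the GUEST_START error shown above.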

TestPause/serial/Start (9.83s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-540000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-540000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.765223083s)

-- stdout --
	* [pause-540000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-540000" primary control-plane node in "pause-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-540000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-540000 -n pause-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-540000 -n pause-540000: exit status 7 (61.241125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.83s)
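This failure, and every NoKubernetes failure that follows, shares one root cause: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network. The check reduces to dialing a unix socket; here is a minimal sketch using only the Go standard library, with the socket path taken from the log and everything else illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "Connection refused" here reproduces the driver error seen in
	// these tests: nothing is listening on the socket_vmnet socket.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}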

TestNoKubernetes/serial/StartWithK8s (9.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 : exit status 80 (9.895175667s)

-- stdout --
	* [NoKubernetes-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-818000" primary control-plane node in "NoKubernetes-818000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000: exit status 7 (35.075416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243671209s)

-- stdout --
	* [NoKubernetes-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-818000
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-818000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000: exit status 7 (59.482166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250426209s)

-- stdout --
	* [NoKubernetes-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-818000
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-818000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000: exit status 7 (73.092917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 : exit status 80 (5.283089209s)

-- stdout --
	* [NoKubernetes-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-818000
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-818000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-818000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-818000 -n NoKubernetes-818000: exit status 7 (33.49ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
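
Note: these NoKubernetes runs take the "Restarting existing qemu2 VM" path because the profile already exists. One useful check when this loops is whether a stale QEMU process survives from an earlier attempt; the driver passes a -pidfile to qemu-system-aarch64 (visible in the libmachine traces further down). A diagnostic sketch assuming the default MINIKUBE_HOME layout; the profile directory below is a placeholder, substitute your own machines/<profile> path:

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    func main() {
    	// Placeholder path; adjust to MINIKUBE_HOME/machines/<profile>/qemu.pid.
    	pidfile := os.ExpandEnv("$HOME/.minikube/machines/NoKubernetes-818000/qemu.pid")
    	data, err := os.ReadFile(pidfile)
    	if err != nil {
    		fmt.Println("no pidfile; the VM never got far enough to daemonize:", err)
    		return
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
    	if err != nil {
    		fmt.Println("unparseable pidfile:", err)
    		return
    	}
    	// Signal 0 tests for process existence without delivering a signal.
    	if err := syscall.Kill(pid, 0); err != nil {
    		fmt.Printf("qemu pid %d is gone: %v\n", pid, err)
    		return
    	}
    	fmt.Printf("qemu pid %d is still running\n", pid)
    }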

TestNetworkPlugins/group/auto/Start (10.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.012233s)

-- stdout --
	* [auto-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-196000" primary control-plane node in "auto-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1028 04:50:22.849399    5214 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:50:22.849552    5214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:22.849556    5214 out.go:358] Setting ErrFile to fd 2...
	I1028 04:50:22.849559    5214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:22.849686    5214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:50:22.850837    5214 out.go:352] Setting JSON to false
	I1028 04:50:22.868929    5214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4793,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:50:22.869036    5214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:50:22.873680    5214 out.go:177] * [auto-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:50:22.881624    5214 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:50:22.881685    5214 notify.go:220] Checking for updates...
	I1028 04:50:22.887626    5214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:50:22.890609    5214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:50:22.891861    5214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:50:22.894643    5214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:50:22.897624    5214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:50:22.900979    5214 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:50:22.901053    5214 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:50:22.901109    5214 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:50:22.905661    5214 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:50:22.912586    5214 start.go:297] selected driver: qemu2
	I1028 04:50:22.912592    5214 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:50:22.912598    5214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:50:22.915019    5214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:50:22.917527    5214 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:50:22.920706    5214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:50:22.920729    5214 cni.go:84] Creating CNI manager for ""
	I1028 04:50:22.920756    5214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:50:22.920765    5214 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:50:22.920793    5214 start.go:340] cluster config:
	{Name:auto-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:50:22.925814    5214 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:50:22.933585    5214 out.go:177] * Starting "auto-196000" primary control-plane node in "auto-196000" cluster
	I1028 04:50:22.936639    5214 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:50:22.936701    5214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:50:22.936720    5214 cache.go:56] Caching tarball of preloaded images
	I1028 04:50:22.936845    5214 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:50:22.936852    5214 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:50:22.936914    5214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/auto-196000/config.json ...
	I1028 04:50:22.936926    5214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/auto-196000/config.json: {Name:mk880c064b32d103478bee78c0803e246db72bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:50:22.937170    5214 start.go:360] acquireMachinesLock for auto-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:22.937218    5214 start.go:364] duration metric: took 38.708µs to acquireMachinesLock for "auto-196000"
	I1028 04:50:22.937229    5214 start.go:93] Provisioning new machine with config: &{Name:auto-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:22.937263    5214 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:22.941631    5214 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:22.956584    5214 start.go:159] libmachine.API.Create for "auto-196000" (driver="qemu2")
	I1028 04:50:22.956617    5214 client.go:168] LocalClient.Create starting
	I1028 04:50:22.956696    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:22.956735    5214 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:22.956746    5214 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:22.956787    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:22.956816    5214 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:22.956825    5214 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:22.957189    5214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:23.113165    5214 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:23.410821    5214 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:23.410830    5214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:23.411044    5214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:23.421102    5214 main.go:141] libmachine: STDOUT: 
	I1028 04:50:23.421146    5214 main.go:141] libmachine: STDERR: 
	I1028 04:50:23.421203    5214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2 +20000M
	I1028 04:50:23.429818    5214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:23.429838    5214 main.go:141] libmachine: STDERR: 
	I1028 04:50:23.429860    5214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:23.429865    5214 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:23.429876    5214 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:23.429910    5214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:ae:08:b7:29:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:23.431745    5214 main.go:141] libmachine: STDOUT: 
	I1028 04:50:23.431758    5214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:23.431785    5214 client.go:171] duration metric: took 475.159833ms to LocalClient.Create
	I1028 04:50:25.433993    5214 start.go:128] duration metric: took 2.4967s to createHost
	I1028 04:50:25.434053    5214 start.go:83] releasing machines lock for "auto-196000", held for 2.496818083s
	W1028 04:50:25.434105    5214 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:25.444354    5214 out.go:177] * Deleting "auto-196000" in qemu2 ...
	W1028 04:50:25.472352    5214 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:25.472377    5214 start.go:729] Will try again in 5 seconds ...
	I1028 04:50:30.474642    5214 start.go:360] acquireMachinesLock for auto-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:30.475220    5214 start.go:364] duration metric: took 444.459µs to acquireMachinesLock for "auto-196000"
	I1028 04:50:30.475336    5214 start.go:93] Provisioning new machine with config: &{Name:auto-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:30.475533    5214 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:30.483394    5214 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:30.524736    5214 start.go:159] libmachine.API.Create for "auto-196000" (driver="qemu2")
	I1028 04:50:30.524788    5214 client.go:168] LocalClient.Create starting
	I1028 04:50:30.524940    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:30.525028    5214 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:30.525043    5214 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:30.525104    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:30.525176    5214 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:30.525188    5214 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:30.525864    5214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:30.693389    5214 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:30.759373    5214 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:30.759383    5214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:30.759590    5214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:30.769435    5214 main.go:141] libmachine: STDOUT: 
	I1028 04:50:30.769456    5214 main.go:141] libmachine: STDERR: 
	I1028 04:50:30.769516    5214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2 +20000M
	I1028 04:50:30.778039    5214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:30.778068    5214 main.go:141] libmachine: STDERR: 
	I1028 04:50:30.778083    5214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:30.778089    5214 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:30.778096    5214 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:30.778131    5214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:50:92:88:4a:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/auto-196000/disk.qcow2
	I1028 04:50:30.779952    5214 main.go:141] libmachine: STDOUT: 
	I1028 04:50:30.779965    5214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:30.779977    5214 client.go:171] duration metric: took 255.183333ms to LocalClient.Create
	I1028 04:50:32.782208    5214 start.go:128] duration metric: took 2.306628375s to createHost
	I1028 04:50:32.782276    5214 start.go:83] releasing machines lock for "auto-196000", held for 2.307027541s
	W1028 04:50:32.782684    5214 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:32.798356    5214 out.go:201] 
	W1028 04:50:32.803407    5214 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:50:32.803478    5214 out.go:270] * 
	* 
	W1028 04:50:32.806660    5214 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:50:32.817333    5214 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.01s)
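
Note: the --alsologtostderr trace above makes minikube's retry shape explicit: createHost fails, the half-created "auto-196000" machine is deleted, start.go:729 waits five seconds, and the whole create sequence runs once more against the same dead socket, which is why each of these tests burns roughly ten seconds. A sketch of that shape with hypothetical helper names (this is not minikube's actual API):

    package main

    import (
    	"errors"
    	"log"
    	"time"
    )

    // startHostWithRetry is a hypothetical helper mirroring the
    // fail -> delete -> wait 5s -> retry-once flow in the trace above.
    func startHostWithRetry(create, cleanup func() error) error {
    	if err := create(); err != nil {
    		log.Printf("! StartHost failed, but will try again: %v", err)
    		if cerr := cleanup(); cerr != nil {
    			log.Printf("cleanup: %v", cerr)
    		}
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		return create()
    	}
    	return nil
    }

    func main() {
    	err := startHostWithRetry(
    		func() error {
    			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    		},
    		func() error { return nil },
    	)
    	// With the daemon down, both attempts fail, matching exit status 80.
    	log.Printf("final: %v", err)
    }

Since the retry dials the same refused socket, it only doubles the runtime; the outcome is decided by the host daemon, not by anything the test can influence.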

TestNetworkPlugins/group/kindnet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.769145834s)

-- stdout --
	* [kindnet-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-196000" primary control-plane node in "kindnet-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1028 04:50:35.226426    5325 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:50:35.226563    5325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:35.226567    5325 out.go:358] Setting ErrFile to fd 2...
	I1028 04:50:35.226570    5325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:35.226705    5325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:50:35.227842    5325 out.go:352] Setting JSON to false
	I1028 04:50:35.246030    5325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4806,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:50:35.246111    5325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:50:35.250800    5325 out.go:177] * [kindnet-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:50:35.258888    5325 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:50:35.258977    5325 notify.go:220] Checking for updates...
	I1028 04:50:35.265756    5325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:50:35.268778    5325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:50:35.271729    5325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:50:35.274728    5325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:50:35.277833    5325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:50:35.281151    5325 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:50:35.281225    5325 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:50:35.281281    5325 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:50:35.285766    5325 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:50:35.291800    5325 start.go:297] selected driver: qemu2
	I1028 04:50:35.291806    5325 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:50:35.291814    5325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:50:35.294309    5325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:50:35.297774    5325 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:50:35.300890    5325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:50:35.300908    5325 cni.go:84] Creating CNI manager for "kindnet"
	I1028 04:50:35.300911    5325 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 04:50:35.300947    5325 start.go:340] cluster config:
	{Name:kindnet-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:50:35.305466    5325 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:50:35.313807    5325 out.go:177] * Starting "kindnet-196000" primary control-plane node in "kindnet-196000" cluster
	I1028 04:50:35.317829    5325 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:50:35.317844    5325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:50:35.317854    5325 cache.go:56] Caching tarball of preloaded images
	I1028 04:50:35.317939    5325 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:50:35.317945    5325 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:50:35.318008    5325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kindnet-196000/config.json ...
	I1028 04:50:35.318018    5325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kindnet-196000/config.json: {Name:mkf8d8c9d40785ba5ceee0ee481e77348317d163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:50:35.318369    5325 start.go:360] acquireMachinesLock for kindnet-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:35.318427    5325 start.go:364] duration metric: took 46.25µs to acquireMachinesLock for "kindnet-196000"
	I1028 04:50:35.318440    5325 start.go:93] Provisioning new machine with config: &{Name:kindnet-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:35.318476    5325 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:35.321793    5325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:35.336917    5325 start.go:159] libmachine.API.Create for "kindnet-196000" (driver="qemu2")
	I1028 04:50:35.336946    5325 client.go:168] LocalClient.Create starting
	I1028 04:50:35.337017    5325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:35.337060    5325 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:35.337071    5325 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:35.337110    5325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:35.337139    5325 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:35.337148    5325 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:35.337620    5325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:35.494826    5325 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:35.567930    5325 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:35.567939    5325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:35.568127    5325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:35.577979    5325 main.go:141] libmachine: STDOUT: 
	I1028 04:50:35.578003    5325 main.go:141] libmachine: STDERR: 
	I1028 04:50:35.578062    5325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2 +20000M
	I1028 04:50:35.586971    5325 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:35.586991    5325 main.go:141] libmachine: STDERR: 
	I1028 04:50:35.587021    5325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:35.587025    5325 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:35.587039    5325 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:35.587089    5325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1d:b2:be:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:35.589007    5325 main.go:141] libmachine: STDOUT: 
	I1028 04:50:35.589023    5325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:35.589042    5325 client.go:171] duration metric: took 252.090292ms to LocalClient.Create
	I1028 04:50:37.590746    5325 start.go:128] duration metric: took 2.272254125s to createHost
	I1028 04:50:37.590791    5325 start.go:83] releasing machines lock for "kindnet-196000", held for 2.27232125s
	W1028 04:50:37.590803    5325 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:37.605352    5325 out.go:177] * Deleting "kindnet-196000" in qemu2 ...
	W1028 04:50:37.614373    5325 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:37.614382    5325 start.go:729] Will try again in 5 seconds ...
	I1028 04:50:42.616516    5325 start.go:360] acquireMachinesLock for kindnet-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:42.616839    5325 start.go:364] duration metric: took 279.667µs to acquireMachinesLock for "kindnet-196000"
	I1028 04:50:42.616904    5325 start.go:93] Provisioning new machine with config: &{Name:kindnet-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:42.617051    5325 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:42.629470    5325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:42.663173    5325 start.go:159] libmachine.API.Create for "kindnet-196000" (driver="qemu2")
	I1028 04:50:42.663211    5325 client.go:168] LocalClient.Create starting
	I1028 04:50:42.663330    5325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:42.663398    5325 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:42.663414    5325 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:42.663469    5325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:42.663518    5325 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:42.663528    5325 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:42.664224    5325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:42.827351    5325 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:42.892699    5325 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:42.892706    5325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:42.892910    5325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:42.902998    5325 main.go:141] libmachine: STDOUT: 
	I1028 04:50:42.903035    5325 main.go:141] libmachine: STDERR: 
	I1028 04:50:42.903092    5325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2 +20000M
	I1028 04:50:42.911552    5325 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:42.911569    5325 main.go:141] libmachine: STDERR: 
	I1028 04:50:42.911580    5325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:42.911591    5325 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:42.911601    5325 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:42.911640    5325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f6:3c:8b:c6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kindnet-196000/disk.qcow2
	I1028 04:50:42.913507    5325 main.go:141] libmachine: STDOUT: 
	I1028 04:50:42.913520    5325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:42.913533    5325 client.go:171] duration metric: took 250.314625ms to LocalClient.Create
	I1028 04:50:44.915732    5325 start.go:128] duration metric: took 2.2986495s to createHost
	I1028 04:50:44.915797    5325 start.go:83] releasing machines lock for "kindnet-196000", held for 2.298931583s
	W1028 04:50:44.916095    5325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:44.933510    5325 out.go:201] 
	W1028 04:50:44.936554    5325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:50:44.936580    5325 out.go:270] * 
	* 
	W1028 04:50:44.939078    5325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:50:44.951468    5325 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.77s)
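
Note: in every one of these traces the disk-image preparation succeeds (qemu-img convert followed by qemu-img resize); the run only dies at the socket_vmnet_client step. The two qemu-img commands can be replayed in isolation to rule the disk path out. A sketch with placeholder file names, to be run from a scratch directory (the CI machine paths are intentionally not reproduced):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholders, not the CI paths
    	// Seed a 1 MiB empty raw image so the commands have input to work on.
    	if err := os.WriteFile(raw, make([]byte, 1<<20), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// The same two commands libmachine logs above, minus the machine directory.
    	for _, args := range [][]string{
    		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
    		{"qemu-img", "resize", qcow2, "+20000M"},
    	} {
    		cmd := exec.Command(args[0], args[1:]...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("%v: %v", args, err)
    		}
    	}
    	log.Println("disk prep OK; the failure must come from the socket step")
    }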

TestNetworkPlugins/group/calico/Start (9.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.905309417s)

-- stdout --
	* [calico-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-196000" primary control-plane node in "calico-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:50:47.394929    5439 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:50:47.395103    5439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:47.395106    5439 out.go:358] Setting ErrFile to fd 2...
	I1028 04:50:47.395109    5439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:47.395243    5439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:50:47.396467    5439 out.go:352] Setting JSON to false
	I1028 04:50:47.414318    5439 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4818,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:50:47.414389    5439 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:50:47.419680    5439 out.go:177] * [calico-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:50:47.427641    5439 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:50:47.427706    5439 notify.go:220] Checking for updates...
	I1028 04:50:47.433609    5439 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:50:47.436548    5439 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:50:47.439589    5439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:50:47.450616    5439 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:50:47.454666    5439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:50:47.457911    5439 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:50:47.457985    5439 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:50:47.458039    5439 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:50:47.462570    5439 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:50:47.468505    5439 start.go:297] selected driver: qemu2
	I1028 04:50:47.468512    5439 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:50:47.468517    5439 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:50:47.470937    5439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:50:47.473571    5439 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:50:47.476702    5439 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:50:47.476718    5439 cni.go:84] Creating CNI manager for "calico"
	I1028 04:50:47.476722    5439 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1028 04:50:47.476747    5439 start.go:340] cluster config:
	{Name:calico-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:50:47.480959    5439 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:50:47.488613    5439 out.go:177] * Starting "calico-196000" primary control-plane node in "calico-196000" cluster
	I1028 04:50:47.492598    5439 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:50:47.492616    5439 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:50:47.492625    5439 cache.go:56] Caching tarball of preloaded images
	I1028 04:50:47.492691    5439 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:50:47.492696    5439 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:50:47.492750    5439 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/calico-196000/config.json ...
	I1028 04:50:47.492761    5439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/calico-196000/config.json: {Name:mkf61e9021db928576acb5d06fa9991dc66e7b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:50:47.493042    5439 start.go:360] acquireMachinesLock for calico-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:47.493087    5439 start.go:364] duration metric: took 39.458µs to acquireMachinesLock for "calico-196000"
	I1028 04:50:47.493098    5439 start.go:93] Provisioning new machine with config: &{Name:calico-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:47.493133    5439 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:47.500608    5439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:47.515928    5439 start.go:159] libmachine.API.Create for "calico-196000" (driver="qemu2")
	I1028 04:50:47.515967    5439 client.go:168] LocalClient.Create starting
	I1028 04:50:47.516045    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:47.516083    5439 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:47.516094    5439 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:47.516134    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:47.516165    5439 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:47.516175    5439 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:47.516570    5439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:47.673798    5439 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:47.794641    5439 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:47.794648    5439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:47.794851    5439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:47.805136    5439 main.go:141] libmachine: STDOUT: 
	I1028 04:50:47.805159    5439 main.go:141] libmachine: STDERR: 
	I1028 04:50:47.805214    5439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2 +20000M
	I1028 04:50:47.814044    5439 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:47.814060    5439 main.go:141] libmachine: STDERR: 
	I1028 04:50:47.814085    5439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:47.814091    5439 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:47.814108    5439 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:47.814153    5439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:76:f1:6a:c3:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:47.816056    5439 main.go:141] libmachine: STDOUT: 
	I1028 04:50:47.816070    5439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:47.816088    5439 client.go:171] duration metric: took 300.113542ms to LocalClient.Create
	I1028 04:50:49.818288    5439 start.go:128] duration metric: took 2.325116208s to createHost
	I1028 04:50:49.818388    5439 start.go:83] releasing machines lock for "calico-196000", held for 2.32528425s
	W1028 04:50:49.818443    5439 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:49.832516    5439 out.go:177] * Deleting "calico-196000" in qemu2 ...
	W1028 04:50:49.858543    5439 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:49.858571    5439 start.go:729] Will try again in 5 seconds ...
	I1028 04:50:54.860863    5439 start.go:360] acquireMachinesLock for calico-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:54.861408    5439 start.go:364] duration metric: took 435.208µs to acquireMachinesLock for "calico-196000"
	I1028 04:50:54.861470    5439 start.go:93] Provisioning new machine with config: &{Name:calico-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:54.861704    5439 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:54.871245    5439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:54.911338    5439 start.go:159] libmachine.API.Create for "calico-196000" (driver="qemu2")
	I1028 04:50:54.911398    5439 client.go:168] LocalClient.Create starting
	I1028 04:50:54.911535    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:54.911613    5439 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:54.911627    5439 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:54.911686    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:54.911736    5439 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:54.911752    5439 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:54.912411    5439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:50:55.075981    5439 main.go:141] libmachine: Creating SSH key...
	I1028 04:50:55.200674    5439 main.go:141] libmachine: Creating Disk image...
	I1028 04:50:55.200683    5439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:50:55.200903    5439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:55.211140    5439 main.go:141] libmachine: STDOUT: 
	I1028 04:50:55.211158    5439 main.go:141] libmachine: STDERR: 
	I1028 04:50:55.211208    5439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2 +20000M
	I1028 04:50:55.219764    5439 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:50:55.219780    5439 main.go:141] libmachine: STDERR: 
	I1028 04:50:55.219797    5439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:55.219806    5439 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:50:55.219814    5439 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:50:55.219840    5439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:67:5b:69:8a:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/calico-196000/disk.qcow2
	I1028 04:50:55.221889    5439 main.go:141] libmachine: STDOUT: 
	I1028 04:50:55.221907    5439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:50:55.221919    5439 client.go:171] duration metric: took 310.514209ms to LocalClient.Create
	I1028 04:50:57.224135    5439 start.go:128] duration metric: took 2.362384458s to createHost
	I1028 04:50:57.224219    5439 start.go:83] releasing machines lock for "calico-196000", held for 2.362778875s
	W1028 04:50:57.224636    5439 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:50:57.234356    5439 out.go:201] 
	W1028 04:50:57.241366    5439 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:50:57.241402    5439 out.go:270] * 
	* 
	W1028 04:50:57.243993    5439 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:50:57.254304    5439 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.91s)
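Triage note: same root cause as the kindnet failure above; the calico CNI path is never exercised because the VM is never created. Once the socket is reachable again, a single subtest can be retried in isolation with standard `go test` subtest selection. The package path below is an assumption about where net_test.go sits in this checkout, and the harness may require extra flags beyond what is shown.

	# -run takes one regexp per subtest level, separated by '/'.
	go test ./test/integration -run 'TestNetworkPlugins/group/calico/Start' -v -timeout 30m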

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.834448041s)

-- stdout --
	* [custom-flannel-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-196000" primary control-plane node in "custom-flannel-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:50:59.840428    5556 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:50:59.840632    5556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:59.840635    5556 out.go:358] Setting ErrFile to fd 2...
	I1028 04:50:59.840638    5556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:50:59.840793    5556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:50:59.841970    5556 out.go:352] Setting JSON to false
	I1028 04:50:59.859980    5556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4830,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:50:59.860058    5556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:50:59.866927    5556 out.go:177] * [custom-flannel-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:50:59.873855    5556 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:50:59.873919    5556 notify.go:220] Checking for updates...
	I1028 04:50:59.881876    5556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:50:59.884912    5556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:50:59.887943    5556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:50:59.890936    5556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:50:59.893919    5556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:50:59.897289    5556 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:50:59.897371    5556 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:50:59.897426    5556 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:50:59.901945    5556 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:50:59.908877    5556 start.go:297] selected driver: qemu2
	I1028 04:50:59.908885    5556 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:50:59.908892    5556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:50:59.911524    5556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:50:59.915929    5556 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:50:59.918876    5556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:50:59.918894    5556 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1028 04:50:59.918903    5556 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1028 04:50:59.918942    5556 start.go:340] cluster config:
	{Name:custom-flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:50:59.923678    5556 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:50:59.931902    5556 out.go:177] * Starting "custom-flannel-196000" primary control-plane node in "custom-flannel-196000" cluster
	I1028 04:50:59.935922    5556 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:50:59.935940    5556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:50:59.935951    5556 cache.go:56] Caching tarball of preloaded images
	I1028 04:50:59.936035    5556 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:50:59.936040    5556 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:50:59.936104    5556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/custom-flannel-196000/config.json ...
	I1028 04:50:59.936115    5556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/custom-flannel-196000/config.json: {Name:mkf04a008fc35fd8176bb347eb4f1b5e2edc9723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:50:59.936490    5556 start.go:360] acquireMachinesLock for custom-flannel-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:50:59.936538    5556 start.go:364] duration metric: took 41.292µs to acquireMachinesLock for "custom-flannel-196000"
	I1028 04:50:59.936549    5556 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:50:59.936583    5556 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:50:59.938696    5556 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:50:59.955071    5556 start.go:159] libmachine.API.Create for "custom-flannel-196000" (driver="qemu2")
	I1028 04:50:59.955100    5556 client.go:168] LocalClient.Create starting
	I1028 04:50:59.955189    5556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:50:59.955225    5556 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:59.955237    5556 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:59.955273    5556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:50:59.955301    5556 main.go:141] libmachine: Decoding PEM data...
	I1028 04:50:59.955308    5556 main.go:141] libmachine: Parsing certificate...
	I1028 04:50:59.955726    5556 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:00.110629    5556 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:00.174853    5556 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:00.174859    5556 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:00.175036    5556 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:00.184987    5556 main.go:141] libmachine: STDOUT: 
	I1028 04:51:00.185016    5556 main.go:141] libmachine: STDERR: 
	I1028 04:51:00.185084    5556 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2 +20000M
	I1028 04:51:00.193529    5556 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:00.193544    5556 main.go:141] libmachine: STDERR: 
	I1028 04:51:00.193563    5556 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:00.193568    5556 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:00.193580    5556 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:00.193609    5556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:7d:a8:e6:f3:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:00.195423    5556 main.go:141] libmachine: STDOUT: 
	I1028 04:51:00.195439    5556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:00.195458    5556 client.go:171] duration metric: took 240.35175ms to LocalClient.Create
	I1028 04:51:02.197684    5556 start.go:128] duration metric: took 2.261061125s to createHost
	I1028 04:51:02.197808    5556 start.go:83] releasing machines lock for "custom-flannel-196000", held for 2.26125175s
	W1028 04:51:02.197901    5556 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:02.204253    5556 out.go:177] * Deleting "custom-flannel-196000" in qemu2 ...
	W1028 04:51:02.239550    5556 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:02.239596    5556 start.go:729] Will try again in 5 seconds ...
	I1028 04:51:07.241911    5556 start.go:360] acquireMachinesLock for custom-flannel-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:07.242444    5556 start.go:364] duration metric: took 424.75µs to acquireMachinesLock for "custom-flannel-196000"
	I1028 04:51:07.242615    5556 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:07.242844    5556 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:07.251492    5556 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:07.299502    5556 start.go:159] libmachine.API.Create for "custom-flannel-196000" (driver="qemu2")
	I1028 04:51:07.299559    5556 client.go:168] LocalClient.Create starting
	I1028 04:51:07.299691    5556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:07.299792    5556 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:07.299811    5556 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:07.299891    5556 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:07.299950    5556 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:07.299963    5556 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:07.300526    5556 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:07.467937    5556 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:07.585718    5556 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:07.585727    5556 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:07.585952    5556 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:07.596246    5556 main.go:141] libmachine: STDOUT: 
	I1028 04:51:07.596275    5556 main.go:141] libmachine: STDERR: 
	I1028 04:51:07.596352    5556 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2 +20000M
	I1028 04:51:07.605268    5556 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:07.605286    5556 main.go:141] libmachine: STDERR: 
	I1028 04:51:07.605306    5556 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:07.605311    5556 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:07.605319    5556 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:07.605354    5556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:eb:ee:ac:dc:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/custom-flannel-196000/disk.qcow2
	I1028 04:51:07.607410    5556 main.go:141] libmachine: STDOUT: 
	I1028 04:51:07.607427    5556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:07.607440    5556 client.go:171] duration metric: took 307.873666ms to LocalClient.Create
	I1028 04:51:09.609535    5556 start.go:128] duration metric: took 2.36666325s to createHost
	I1028 04:51:09.609602    5556 start.go:83] releasing machines lock for "custom-flannel-196000", held for 2.367128958s
	W1028 04:51:09.609776    5556 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:09.620983    5556 out.go:201] 
	W1028 04:51:09.625087    5556 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:51:09.625093    5556 out.go:270] * 
	* 
	W1028 04:51:09.625595    5556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:51:09.635062    5556 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
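Triage note: the cluster config dumps above record both halves of the socket contract: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client and SocketVMnetPath:/var/run/socket_vmnet. Two quick checks that the daemon side actually serves that same path, using only stock macOS tools; the bracketed grep pattern is just a convention to keep grep from matching its own process.

	# Was the daemon started at all, and with which socket path argument?
	ps aux | grep -i '[s]ocket_vmnet'
	# Probe the Unix socket directly; nc exits non-zero immediately if nothing is listening.
	nc -U /var/run/socket_vmnet < /dev/null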

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.762530709s)

-- stdout --
	* [false-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-196000" primary control-plane node in "false-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:51:12.168965    5673 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:51:12.169133    5673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:12.169136    5673 out.go:358] Setting ErrFile to fd 2...
	I1028 04:51:12.169138    5673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:12.169285    5673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:51:12.170464    5673 out.go:352] Setting JSON to false
	I1028 04:51:12.188508    5673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:51:12.188594    5673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:51:12.194300    5673 out.go:177] * [false-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:51:12.202389    5673 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:51:12.202465    5673 notify.go:220] Checking for updates...
	I1028 04:51:12.210299    5673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:51:12.213237    5673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:51:12.216394    5673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:51:12.219399    5673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:51:12.220801    5673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:51:12.223634    5673 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:51:12.223712    5673 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:51:12.223768    5673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:51:12.228312    5673 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:51:12.233322    5673 start.go:297] selected driver: qemu2
	I1028 04:51:12.233329    5673 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:51:12.233337    5673 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:51:12.235687    5673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:51:12.239373    5673 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:51:12.240780    5673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:51:12.240797    5673 cni.go:84] Creating CNI manager for "false"
	I1028 04:51:12.240827    5673 start.go:340] cluster config:
	{Name:false-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:51:12.245120    5673 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:51:12.253369    5673 out.go:177] * Starting "false-196000" primary control-plane node in "false-196000" cluster
	I1028 04:51:12.257229    5673 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:51:12.257244    5673 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:51:12.257255    5673 cache.go:56] Caching tarball of preloaded images
	I1028 04:51:12.257322    5673 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:51:12.257327    5673 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:51:12.257381    5673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/false-196000/config.json ...
	I1028 04:51:12.257392    5673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/false-196000/config.json: {Name:mk63e301f5d12741e456be7e89fac27b14d56513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:51:12.257666    5673 start.go:360] acquireMachinesLock for false-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:12.257708    5673 start.go:364] duration metric: took 37.208µs to acquireMachinesLock for "false-196000"
	I1028 04:51:12.257720    5673 start.go:93] Provisioning new machine with config: &{Name:false-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:12.257742    5673 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:12.260384    5673 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:12.275225    5673 start.go:159] libmachine.API.Create for "false-196000" (driver="qemu2")
	I1028 04:51:12.275255    5673 client.go:168] LocalClient.Create starting
	I1028 04:51:12.275333    5673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:12.275368    5673 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:12.275380    5673 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:12.275416    5673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:12.275445    5673 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:12.275457    5673 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:12.275815    5673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:12.433779    5673 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:12.462451    5673 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:12.462458    5673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:12.462645    5673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:12.473007    5673 main.go:141] libmachine: STDOUT: 
	I1028 04:51:12.473026    5673 main.go:141] libmachine: STDERR: 
	I1028 04:51:12.473086    5673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2 +20000M
	I1028 04:51:12.481995    5673 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:12.482017    5673 main.go:141] libmachine: STDERR: 
	I1028 04:51:12.482037    5673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:12.482042    5673 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:12.482054    5673 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:12.482085    5673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d5:5d:bf:60:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:12.484069    5673 main.go:141] libmachine: STDOUT: 
	I1028 04:51:12.484088    5673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:12.484110    5673 client.go:171] duration metric: took 208.847209ms to LocalClient.Create
	I1028 04:51:14.486221    5673 start.go:128] duration metric: took 2.228455s to createHost
	I1028 04:51:14.486299    5673 start.go:83] releasing machines lock for "false-196000", held for 2.228576292s
	W1028 04:51:14.486327    5673 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:14.496569    5673 out.go:177] * Deleting "false-196000" in qemu2 ...
	W1028 04:51:14.514073    5673 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:14.514084    5673 start.go:729] Will try again in 5 seconds ...
	I1028 04:51:19.514459    5673 start.go:360] acquireMachinesLock for false-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:19.514793    5673 start.go:364] duration metric: took 291.5µs to acquireMachinesLock for "false-196000"
	I1028 04:51:19.514857    5673 start.go:93] Provisioning new machine with config: &{Name:false-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:19.515012    5673 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:19.526736    5673 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:19.554090    5673 start.go:159] libmachine.API.Create for "false-196000" (driver="qemu2")
	I1028 04:51:19.554126    5673 client.go:168] LocalClient.Create starting
	I1028 04:51:19.554224    5673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:19.554282    5673 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:19.554298    5673 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:19.554340    5673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:19.554378    5673 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:19.554388    5673 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:19.554785    5673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:19.714541    5673 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:19.836993    5673 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:19.837001    5673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:19.837193    5673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:19.847377    5673 main.go:141] libmachine: STDOUT: 
	I1028 04:51:19.847397    5673 main.go:141] libmachine: STDERR: 
	I1028 04:51:19.847462    5673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2 +20000M
	I1028 04:51:19.855978    5673 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:19.855994    5673 main.go:141] libmachine: STDERR: 
	I1028 04:51:19.856006    5673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:19.856010    5673 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:19.856019    5673 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:19.856050    5673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cd:f7:a8:fa:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/false-196000/disk.qcow2
	I1028 04:51:19.857854    5673 main.go:141] libmachine: STDOUT: 
	I1028 04:51:19.857869    5673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:19.857881    5673 client.go:171] duration metric: took 303.748458ms to LocalClient.Create
	I1028 04:51:21.860122    5673 start.go:128] duration metric: took 2.345048167s to createHost
	I1028 04:51:21.860226    5673 start.go:83] releasing machines lock for "false-196000", held for 2.345405291s
	W1028 04:51:21.860613    5673 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:21.870130    5673 out.go:201] 
	W1028 04:51:21.874282    5673 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:51:21.874320    5673 out.go:270] * 
	* 
	W1028 04:51:21.876187    5673 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:51:21.886214    5673 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
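
Every failed start in this group reduces to the same root cause: the qemu2 driver never gets a VM network because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused" on both create attempts above). The following Go sketch is an editor's diagnostic, not part of the minikube test suite; the socket path is taken from the log, everything else is assumed:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Attempt the same unix-socket connection that socket_vmnet_client
		// makes before handing a file descriptor to qemu-system-aarch64.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// "connection refused" here reproduces the failure above: the
			// socket_vmnet daemon is not running, or not listening at this path.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails on the agent, every qemu2 start will exit with status 80 within roughly ten seconds, exactly as the tests in this group do, regardless of which CNI is requested.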

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.788069875s)

-- stdout --
	* [enable-default-cni-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-196000" primary control-plane node in "enable-default-cni-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:51:24.268155    5782 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:51:24.268327    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:24.268330    5782 out.go:358] Setting ErrFile to fd 2...
	I1028 04:51:24.268332    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:24.268467    5782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:51:24.269634    5782 out.go:352] Setting JSON to false
	I1028 04:51:24.287477    5782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4855,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:51:24.287556    5782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:51:24.292691    5782 out.go:177] * [enable-default-cni-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:51:24.300796    5782 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:51:24.300830    5782 notify.go:220] Checking for updates...
	I1028 04:51:24.309718    5782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:51:24.312728    5782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:51:24.316790    5782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:51:24.319709    5782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:51:24.322788    5782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:51:24.326158    5782 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:51:24.326239    5782 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:51:24.326292    5782 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:51:24.330755    5782 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:51:24.337610    5782 start.go:297] selected driver: qemu2
	I1028 04:51:24.337615    5782 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:51:24.337621    5782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:51:24.340168    5782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:51:24.343782    5782 out.go:177] * Automatically selected the socket_vmnet network
	E1028 04:51:24.346852    5782 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1028 04:51:24.346864    5782 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:51:24.346940    5782 cni.go:84] Creating CNI manager for "bridge"
	I1028 04:51:24.346944    5782 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:51:24.346975    5782 start.go:340] cluster config:
	{Name:enable-default-cni-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:51:24.351685    5782 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:51:24.359784    5782 out.go:177] * Starting "enable-default-cni-196000" primary control-plane node in "enable-default-cni-196000" cluster
	I1028 04:51:24.363687    5782 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:51:24.363702    5782 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:51:24.363711    5782 cache.go:56] Caching tarball of preloaded images
	I1028 04:51:24.363788    5782 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:51:24.363800    5782 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:51:24.363857    5782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/enable-default-cni-196000/config.json ...
	I1028 04:51:24.363874    5782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/enable-default-cni-196000/config.json: {Name:mk261e5b23a9cd6e5587b2fb84e7a44f9a00177e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:51:24.364113    5782 start.go:360] acquireMachinesLock for enable-default-cni-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:24.364159    5782 start.go:364] duration metric: took 38.875µs to acquireMachinesLock for "enable-default-cni-196000"
	I1028 04:51:24.364171    5782 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:24.364219    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:24.368763    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:24.384912    5782 start.go:159] libmachine.API.Create for "enable-default-cni-196000" (driver="qemu2")
	I1028 04:51:24.384951    5782 client.go:168] LocalClient.Create starting
	I1028 04:51:24.385030    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:24.385070    5782 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:24.385080    5782 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:24.385121    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:24.385149    5782 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:24.385158    5782 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:24.385587    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:24.545579    5782 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:24.596331    5782 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:24.596339    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:24.596533    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:24.606503    5782 main.go:141] libmachine: STDOUT: 
	I1028 04:51:24.606525    5782 main.go:141] libmachine: STDERR: 
	I1028 04:51:24.606591    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2 +20000M
	I1028 04:51:24.615056    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:24.615073    5782 main.go:141] libmachine: STDERR: 
	I1028 04:51:24.615086    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:24.615093    5782 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:24.615106    5782 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:24.615142    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a6:55:65:de:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:24.616978    5782 main.go:141] libmachine: STDOUT: 
	I1028 04:51:24.616994    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:24.617011    5782 client.go:171] duration metric: took 232.053334ms to LocalClient.Create
	I1028 04:51:26.619168    5782 start.go:128] duration metric: took 2.254928042s to createHost
	I1028 04:51:26.619204    5782 start.go:83] releasing machines lock for "enable-default-cni-196000", held for 2.255030292s
	W1028 04:51:26.619256    5782 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:26.628382    5782 out.go:177] * Deleting "enable-default-cni-196000" in qemu2 ...
	W1028 04:51:26.647182    5782 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:26.647193    5782 start.go:729] Will try again in 5 seconds ...
	I1028 04:51:31.649515    5782 start.go:360] acquireMachinesLock for enable-default-cni-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:31.650170    5782 start.go:364] duration metric: took 525.459µs to acquireMachinesLock for "enable-default-cni-196000"
	I1028 04:51:31.650347    5782 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:31.650637    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:31.665198    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:31.713299    5782 start.go:159] libmachine.API.Create for "enable-default-cni-196000" (driver="qemu2")
	I1028 04:51:31.713390    5782 client.go:168] LocalClient.Create starting
	I1028 04:51:31.713605    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:31.713705    5782 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:31.713724    5782 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:31.713809    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:31.713870    5782 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:31.713886    5782 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:31.714651    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:31.882706    5782 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:31.965494    5782 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:31.965503    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:31.965693    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:31.975996    5782 main.go:141] libmachine: STDOUT: 
	I1028 04:51:31.976019    5782 main.go:141] libmachine: STDERR: 
	I1028 04:51:31.976095    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2 +20000M
	I1028 04:51:31.985129    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:31.985144    5782 main.go:141] libmachine: STDERR: 
	I1028 04:51:31.985158    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:31.985162    5782 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:31.985172    5782 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:31.985204    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:f7:ec:f2:29:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/enable-default-cni-196000/disk.qcow2
	I1028 04:51:31.987060    5782 main.go:141] libmachine: STDOUT: 
	I1028 04:51:31.987075    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:31.987088    5782 client.go:171] duration metric: took 273.67825ms to LocalClient.Create
	I1028 04:51:33.989269    5782 start.go:128] duration metric: took 2.338581667s to createHost
	I1028 04:51:33.989331    5782 start.go:83] releasing machines lock for "enable-default-cni-196000", held for 2.339130417s
	W1028 04:51:33.989765    5782 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:33.997210    5782 out.go:201] 
	W1028 04:51:34.002358    5782 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:51:34.002401    5782 out.go:270] * 
	* 
	W1028 04:51:34.003985    5782 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:51:34.014230    5782 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
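
Two behaviors in the log above are worth calling out: the deprecated --enable-default-cni flag is translated to a bridge CNI (the E1028 line from start_flags.go:464), and host provisioning is retried exactly once after a fixed 5-second delay before the run exits with GUEST_PROVISION. Below is a compact Go sketch of that retry flow, under the assumption that createHost stands in for the real libmachine create call; it is a hypothetical helper named here for illustration only, not minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a hypothetical stand-in for libmachine's host creation,
	// failing the way every create attempt in this log does.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				// The second failure is fatal and maps to exit status 80.
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

The single fixed retry also explains the consistent ~9.8-second duration of each failed Start test in this group: two create attempts of roughly 2.2-2.3 seconds each (see the createHost duration metrics), plus the 5-second wait between them.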

TestNetworkPlugins/group/flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.764286375s)

-- stdout --
	* [flannel-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-196000" primary control-plane node in "flannel-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:51:36.402202    5891 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:51:36.402360    5891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:36.402364    5891 out.go:358] Setting ErrFile to fd 2...
	I1028 04:51:36.402371    5891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:36.402521    5891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:51:36.403810    5891 out.go:352] Setting JSON to false
	I1028 04:51:36.422012    5891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4867,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:51:36.422087    5891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:51:36.427267    5891 out.go:177] * [flannel-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:51:36.435340    5891 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:51:36.435417    5891 notify.go:220] Checking for updates...
	I1028 04:51:36.441244    5891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:51:36.444361    5891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:51:36.448246    5891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:51:36.451269    5891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:51:36.454281    5891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:51:36.457669    5891 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:51:36.457745    5891 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:51:36.457797    5891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:51:36.462204    5891 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:51:36.468210    5891 start.go:297] selected driver: qemu2
	I1028 04:51:36.468215    5891 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:51:36.468220    5891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:51:36.470555    5891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:51:36.474217    5891 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:51:36.477312    5891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:51:36.477331    5891 cni.go:84] Creating CNI manager for "flannel"
	I1028 04:51:36.477334    5891 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1028 04:51:36.477367    5891 start.go:340] cluster config:
	{Name:flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:51:36.481716    5891 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:51:36.490242    5891 out.go:177] * Starting "flannel-196000" primary control-plane node in "flannel-196000" cluster
	I1028 04:51:36.494299    5891 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:51:36.494314    5891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:51:36.494322    5891 cache.go:56] Caching tarball of preloaded images
	I1028 04:51:36.494386    5891 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:51:36.494391    5891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:51:36.494437    5891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/flannel-196000/config.json ...
	I1028 04:51:36.494447    5891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/flannel-196000/config.json: {Name:mk364298e0304c04dfcaf8d1ef5752b85a06c294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:51:36.494728    5891 start.go:360] acquireMachinesLock for flannel-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:36.494777    5891 start.go:364] duration metric: took 42.542µs to acquireMachinesLock for "flannel-196000"
	I1028 04:51:36.494788    5891 start.go:93] Provisioning new machine with config: &{Name:flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:36.494824    5891 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:36.498298    5891 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:36.513232    5891 start.go:159] libmachine.API.Create for "flannel-196000" (driver="qemu2")
	I1028 04:51:36.513261    5891 client.go:168] LocalClient.Create starting
	I1028 04:51:36.513328    5891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:36.513366    5891 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:36.513376    5891 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:36.513411    5891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:36.513442    5891 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:36.513451    5891 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:36.513839    5891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:36.672270    5891 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:36.731273    5891 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:36.731280    5891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:36.731454    5891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:36.741386    5891 main.go:141] libmachine: STDOUT: 
	I1028 04:51:36.741412    5891 main.go:141] libmachine: STDERR: 
	I1028 04:51:36.741474    5891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2 +20000M
	I1028 04:51:36.750174    5891 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:36.750190    5891 main.go:141] libmachine: STDERR: 
	I1028 04:51:36.750205    5891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:36.750211    5891 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:36.750225    5891 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:36.750253    5891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c2:48:a2:e2:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:36.752117    5891 main.go:141] libmachine: STDOUT: 
	I1028 04:51:36.752131    5891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:36.752153    5891 client.go:171] duration metric: took 238.884625ms to LocalClient.Create
	I1028 04:51:38.754301    5891 start.go:128] duration metric: took 2.259454041s to createHost
	I1028 04:51:38.754353    5891 start.go:83] releasing machines lock for "flannel-196000", held for 2.259562083s
	W1028 04:51:38.754388    5891 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:38.764774    5891 out.go:177] * Deleting "flannel-196000" in qemu2 ...
	W1028 04:51:38.788188    5891 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:38.788200    5891 start.go:729] Will try again in 5 seconds ...
	I1028 04:51:43.790571    5891 start.go:360] acquireMachinesLock for flannel-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:43.791295    5891 start.go:364] duration metric: took 608.959µs to acquireMachinesLock for "flannel-196000"
	I1028 04:51:43.791370    5891 start.go:93] Provisioning new machine with config: &{Name:flannel-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:43.791663    5891 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:43.800329    5891 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:43.847759    5891 start.go:159] libmachine.API.Create for "flannel-196000" (driver="qemu2")
	I1028 04:51:43.847827    5891 client.go:168] LocalClient.Create starting
	I1028 04:51:43.847995    5891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:43.848084    5891 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:43.848103    5891 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:43.848162    5891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:43.848224    5891 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:43.848238    5891 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:43.848816    5891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:44.019312    5891 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:44.067906    5891 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:44.067912    5891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:44.068098    5891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:44.078112    5891 main.go:141] libmachine: STDOUT: 
	I1028 04:51:44.078137    5891 main.go:141] libmachine: STDERR: 
	I1028 04:51:44.078200    5891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2 +20000M
	I1028 04:51:44.087218    5891 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:44.087234    5891 main.go:141] libmachine: STDERR: 
	I1028 04:51:44.087252    5891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:44.087258    5891 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:44.087267    5891 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:44.087296    5891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:1e:7a:35:e5:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/flannel-196000/disk.qcow2
	I1028 04:51:44.089173    5891 main.go:141] libmachine: STDOUT: 
	I1028 04:51:44.089188    5891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:44.089201    5891 client.go:171] duration metric: took 241.366417ms to LocalClient.Create
	I1028 04:51:46.091436    5891 start.go:128] duration metric: took 2.299717542s to createHost
	I1028 04:51:46.091540    5891 start.go:83] releasing machines lock for "flannel-196000", held for 2.300209625s
	W1028 04:51:46.091941    5891 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:46.101603    5891 out.go:201] 
	W1028 04:51:46.107762    5891 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:51:46.107790    5891 out.go:270] * 
	W1028 04:51:46.110341    5891 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:51:46.119548    5891 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
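
Note: this failure and the bridge and kubenet failures that follow share one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never launched and every create attempt ends with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'. A minimal diagnostic sketch for the build host, assuming the paths recorded in the log above; the socket_vmnet binary path and the --vmnet-gateway value are assumptions taken from socket_vmnet's documented defaults, not from this report:

	# Is anything serving the unix socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, restart it; vmnet access requires root, and the
	# gateway address below is socket_vmnet's documented default (assumption).
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	    --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &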

TestNetworkPlugins/group/bridge/Start (9.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.843643666s)

-- stdout --
	* [bridge-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-196000" primary control-plane node in "bridge-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:51:48.763410    6008 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:51:48.763568    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:48.763572    6008 out.go:358] Setting ErrFile to fd 2...
	I1028 04:51:48.763574    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:51:48.763712    6008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:51:48.764891    6008 out.go:352] Setting JSON to false
	I1028 04:51:48.782923    6008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4879,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:51:48.783005    6008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:51:48.788026    6008 out.go:177] * [bridge-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:51:48.802482    6008 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:51:48.802519    6008 notify.go:220] Checking for updates...
	I1028 04:51:48.809940    6008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:51:48.812915    6008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:51:48.816951    6008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:51:48.819973    6008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:51:48.822914    6008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:51:48.826336    6008 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:51:48.826407    6008 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:51:48.826454    6008 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:51:48.830861    6008 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:51:48.837912    6008 start.go:297] selected driver: qemu2
	I1028 04:51:48.837918    6008 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:51:48.837923    6008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:51:48.840354    6008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:51:48.843872    6008 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:51:48.847944    6008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:51:48.847964    6008 cni.go:84] Creating CNI manager for "bridge"
	I1028 04:51:48.847968    6008 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:51:48.847999    6008 start.go:340] cluster config:
	{Name:bridge-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:51:48.852365    6008 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:51:48.856862    6008 out.go:177] * Starting "bridge-196000" primary control-plane node in "bridge-196000" cluster
	I1028 04:51:48.864967    6008 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:51:48.864985    6008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:51:48.864994    6008 cache.go:56] Caching tarball of preloaded images
	I1028 04:51:48.865072    6008 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:51:48.865077    6008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:51:48.865142    6008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/bridge-196000/config.json ...
	I1028 04:51:48.865158    6008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/bridge-196000/config.json: {Name:mkad6e62835e420e25c50f16b88d5da61abbbbbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:51:48.865392    6008 start.go:360] acquireMachinesLock for bridge-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:48.865436    6008 start.go:364] duration metric: took 39.041µs to acquireMachinesLock for "bridge-196000"
	I1028 04:51:48.865449    6008 start.go:93] Provisioning new machine with config: &{Name:bridge-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:48.865488    6008 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:48.868919    6008 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:48.884016    6008 start.go:159] libmachine.API.Create for "bridge-196000" (driver="qemu2")
	I1028 04:51:48.884041    6008 client.go:168] LocalClient.Create starting
	I1028 04:51:48.884110    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:48.884147    6008 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:48.884158    6008 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:48.884195    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:48.884224    6008 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:48.884232    6008 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:48.884604    6008 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:49.040527    6008 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:49.132708    6008 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:49.132731    6008 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:49.132940    6008 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:49.143107    6008 main.go:141] libmachine: STDOUT: 
	I1028 04:51:49.143130    6008 main.go:141] libmachine: STDERR: 
	I1028 04:51:49.143187    6008 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2 +20000M
	I1028 04:51:49.151755    6008 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:49.151770    6008 main.go:141] libmachine: STDERR: 
	I1028 04:51:49.151789    6008 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:49.151803    6008 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:49.151816    6008 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:49.151853    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:cd:24:44:5f:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:49.153699    6008 main.go:141] libmachine: STDOUT: 
	I1028 04:51:49.153713    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:49.153734    6008 client.go:171] duration metric: took 269.684292ms to LocalClient.Create
	I1028 04:51:51.155950    6008 start.go:128] duration metric: took 2.290425833s to createHost
	I1028 04:51:51.156107    6008 start.go:83] releasing machines lock for "bridge-196000", held for 2.290617041s
	W1028 04:51:51.156183    6008 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:51.166415    6008 out.go:177] * Deleting "bridge-196000" in qemu2 ...
	W1028 04:51:51.194257    6008 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:51.194286    6008 start.go:729] Will try again in 5 seconds ...
	I1028 04:51:56.195713    6008 start.go:360] acquireMachinesLock for bridge-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:51:56.196420    6008 start.go:364] duration metric: took 556.75µs to acquireMachinesLock for "bridge-196000"
	I1028 04:51:56.196623    6008 start.go:93] Provisioning new machine with config: &{Name:bridge-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:51:56.196880    6008 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:51:56.205392    6008 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:51:56.253720    6008 start.go:159] libmachine.API.Create for "bridge-196000" (driver="qemu2")
	I1028 04:51:56.253768    6008 client.go:168] LocalClient.Create starting
	I1028 04:51:56.253904    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:51:56.253990    6008 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:56.254011    6008 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:56.254086    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:51:56.254146    6008 main.go:141] libmachine: Decoding PEM data...
	I1028 04:51:56.254158    6008 main.go:141] libmachine: Parsing certificate...
	I1028 04:51:56.254843    6008 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:51:56.422316    6008 main.go:141] libmachine: Creating SSH key...
	I1028 04:51:56.503538    6008 main.go:141] libmachine: Creating Disk image...
	I1028 04:51:56.503547    6008 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:51:56.503776    6008 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:56.514592    6008 main.go:141] libmachine: STDOUT: 
	I1028 04:51:56.514682    6008 main.go:141] libmachine: STDERR: 
	I1028 04:51:56.514744    6008 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2 +20000M
	I1028 04:51:56.523920    6008 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:51:56.523960    6008 main.go:141] libmachine: STDERR: 
	I1028 04:51:56.523975    6008 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:56.523981    6008 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:51:56.523988    6008 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:51:56.524019    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ae:0e:ab:d7:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/bridge-196000/disk.qcow2
	I1028 04:51:56.526203    6008 main.go:141] libmachine: STDOUT: 
	I1028 04:51:56.526220    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:51:56.526232    6008 client.go:171] duration metric: took 272.456167ms to LocalClient.Create
	I1028 04:51:58.527053    6008 start.go:128] duration metric: took 2.330138959s to createHost
	I1028 04:51:58.527098    6008 start.go:83] releasing machines lock for "bridge-196000", held for 2.330644583s
	W1028 04:51:58.527253    6008 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:51:58.541564    6008 out.go:201] 
	W1028 04:51:58.545431    6008 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:51:58.545446    6008 out.go:270] * 
	W1028 04:51:58.546260    6008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:51:58.556643    6008 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
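
Note: the daemon is only probed deep inside libmachine's create path, so each failing Start test still pays the full cycle (two createHost attempts of roughly 2.3s each plus the 5-second retry back-off) before exiting with status 80 / GUEST_PROVISION. A hypothetical pre-flight guard for the CI job that would fail fast instead, assuming the BSD nc bundled with macOS (-U dials unix-domain sockets, -z checks connectability without sending data):

	# Hypothetical fail-fast guard to run before the qemu2 test groups.
	if ! nc -z -U /var/run/socket_vmnet 2>/dev/null; then
	    echo "socket_vmnet is not accepting connections" >&2
	    exit 1
	fi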

TestNetworkPlugins/group/kubenet/Start (10.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-196000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.070943375s)

-- stdout --
	* [kubenet-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-196000" primary control-plane node in "kubenet-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:00.919039    6119 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:00.919188    6119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:00.919191    6119 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:00.919194    6119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:00.919323    6119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:00.920463    6119 out.go:352] Setting JSON to false
	I1028 04:52:00.938311    6119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4891,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:00.938400    6119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:00.944648    6119 out.go:177] * [kubenet-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:00.951569    6119 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:00.951647    6119 notify.go:220] Checking for updates...
	I1028 04:52:00.958546    6119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:00.961582    6119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:00.965613    6119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:00.968488    6119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:00.972391    6119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:00.975925    6119 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:00.976003    6119 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:52:00.976044    6119 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:00.979576    6119 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:52:00.986567    6119 start.go:297] selected driver: qemu2
	I1028 04:52:00.986572    6119 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:52:00.986578    6119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:00.989015    6119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:52:00.992556    6119 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:52:00.995627    6119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:00.995643    6119 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1028 04:52:00.995669    6119 start.go:340] cluster config:
	{Name:kubenet-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:00.999825    6119 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:01.007563    6119 out.go:177] * Starting "kubenet-196000" primary control-plane node in "kubenet-196000" cluster
	I1028 04:52:01.011584    6119 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:52:01.011596    6119 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:52:01.011604    6119 cache.go:56] Caching tarball of preloaded images
	I1028 04:52:01.011671    6119 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:52:01.011676    6119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:52:01.011732    6119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kubenet-196000/config.json ...
	I1028 04:52:01.011742    6119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/kubenet-196000/config.json: {Name:mk61d9cda52b6a376ca212437a553e67aee551bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:52:01.012024    6119 start.go:360] acquireMachinesLock for kubenet-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:01.012077    6119 start.go:364] duration metric: took 45.417µs to acquireMachinesLock for "kubenet-196000"
	I1028 04:52:01.012088    6119 start.go:93] Provisioning new machine with config: &{Name:kubenet-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:01.012110    6119 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:01.015510    6119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:52:01.029781    6119 start.go:159] libmachine.API.Create for "kubenet-196000" (driver="qemu2")
	I1028 04:52:01.029807    6119 client.go:168] LocalClient.Create starting
	I1028 04:52:01.029875    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:01.029913    6119 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:01.029928    6119 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:01.029965    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:01.029993    6119 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:01.030005    6119 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:01.030349    6119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:01.187744    6119 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:01.488799    6119 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:01.488813    6119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:01.489083    6119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:01.500024    6119 main.go:141] libmachine: STDOUT: 
	I1028 04:52:01.500040    6119 main.go:141] libmachine: STDERR: 
	I1028 04:52:01.500116    6119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2 +20000M
	I1028 04:52:01.509084    6119 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:01.509098    6119 main.go:141] libmachine: STDERR: 
	I1028 04:52:01.509114    6119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:01.509126    6119 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:01.509141    6119 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:01.509168    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:82:3d:7f:f4:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:01.511070    6119 main.go:141] libmachine: STDOUT: 
	I1028 04:52:01.511087    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:01.511107    6119 client.go:171] duration metric: took 481.293334ms to LocalClient.Create
	I1028 04:52:03.513533    6119 start.go:128] duration metric: took 2.501348458s to createHost
	I1028 04:52:03.513637    6119 start.go:83] releasing machines lock for "kubenet-196000", held for 2.501539541s
	W1028 04:52:03.513694    6119 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:03.521003    6119 out.go:177] * Deleting "kubenet-196000" in qemu2 ...
	W1028 04:52:03.554819    6119 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:03.554852    6119 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:08.557247    6119 start.go:360] acquireMachinesLock for kubenet-196000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:08.557893    6119 start.go:364] duration metric: took 551.541µs to acquireMachinesLock for "kubenet-196000"
	I1028 04:52:08.557977    6119 start.go:93] Provisioning new machine with config: &{Name:kubenet-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:08.558243    6119 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:08.568862    6119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 04:52:08.619004    6119 start.go:159] libmachine.API.Create for "kubenet-196000" (driver="qemu2")
	I1028 04:52:08.619057    6119 client.go:168] LocalClient.Create starting
	I1028 04:52:08.619216    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:08.619324    6119 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:08.619341    6119 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:08.619404    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:08.619465    6119 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:08.619477    6119 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:08.620234    6119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:08.788270    6119 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:08.893287    6119 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:08.893299    6119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:08.893533    6119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:08.903752    6119 main.go:141] libmachine: STDOUT: 
	I1028 04:52:08.903777    6119 main.go:141] libmachine: STDERR: 
	I1028 04:52:08.903844    6119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2 +20000M
	I1028 04:52:08.912512    6119 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:08.912526    6119 main.go:141] libmachine: STDERR: 
	I1028 04:52:08.912541    6119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:08.912548    6119 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:08.912556    6119 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:08.912595    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:8d:3e:97:22:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/kubenet-196000/disk.qcow2
	I1028 04:52:08.914474    6119 main.go:141] libmachine: STDOUT: 
	I1028 04:52:08.914489    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:08.914504    6119 client.go:171] duration metric: took 295.435583ms to LocalClient.Create
	I1028 04:52:10.916648    6119 start.go:128] duration metric: took 2.358359125s to createHost
	I1028 04:52:10.916715    6119 start.go:83] releasing machines lock for "kubenet-196000", held for 2.358789083s
	W1028 04:52:10.916999    6119 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:10.926636    6119 out.go:201] 
	W1028 04:52:10.931723    6119 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:10.931745    6119 out.go:270] * 
	* 
	W1028 04:52:10.933569    6119 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:10.944649    6119 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.07s)
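Every start failure captured in this section dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched, and everything downstream (exit status 80, the post-mortem "Stopped" host state) is fallout. A minimal triage sketch for the CI host, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (the service name and restart mechanism are assumptions; only the socket path comes from the log):

	# Is the socket present, and is any daemon registered for it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

	# Restart the daemon; minikube's docs run the Homebrew service as root
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet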

TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.831292417s)

-- stdout --
	* [old-k8s-version-498000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-498000" primary control-plane node in "old-k8s-version-498000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-498000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:13.488123    6232 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:13.488293    6232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:13.488297    6232 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:13.488300    6232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:13.488411    6232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:13.489571    6232 out.go:352] Setting JSON to false
	I1028 04:52:13.507712    6232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4904,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:13.507786    6232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:13.517317    6232 out.go:177] * [old-k8s-version-498000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:13.521400    6232 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:13.521451    6232 notify.go:220] Checking for updates...
	I1028 04:52:13.528361    6232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:13.532359    6232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:13.536371    6232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:13.539312    6232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:13.542372    6232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:13.545799    6232 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:13.545888    6232 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:52:13.545946    6232 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:13.550396    6232 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:52:13.557215    6232 start.go:297] selected driver: qemu2
	I1028 04:52:13.557221    6232 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:52:13.557227    6232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:13.559758    6232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:52:13.564367    6232 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:52:13.567399    6232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:13.567419    6232 cni.go:84] Creating CNI manager for ""
	I1028 04:52:13.567438    6232 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 04:52:13.567459    6232 start.go:340] cluster config:
	{Name:old-k8s-version-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:13.571802    6232 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:13.580362    6232 out.go:177] * Starting "old-k8s-version-498000" primary control-plane node in "old-k8s-version-498000" cluster
	I1028 04:52:13.584319    6232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:52:13.584334    6232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:52:13.584343    6232 cache.go:56] Caching tarball of preloaded images
	I1028 04:52:13.584422    6232 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:52:13.584434    6232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 04:52:13.584495    6232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/old-k8s-version-498000/config.json ...
	I1028 04:52:13.584505    6232 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/old-k8s-version-498000/config.json: {Name:mkd44549a55a95cd53540c32d83d2cd8a11b1d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:52:13.584745    6232 start.go:360] acquireMachinesLock for old-k8s-version-498000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:13.584792    6232 start.go:364] duration metric: took 41.083µs to acquireMachinesLock for "old-k8s-version-498000"
	I1028 04:52:13.584803    6232 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:13.584834    6232 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:13.589394    6232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:13.604342    6232 start.go:159] libmachine.API.Create for "old-k8s-version-498000" (driver="qemu2")
	I1028 04:52:13.604366    6232 client.go:168] LocalClient.Create starting
	I1028 04:52:13.604438    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:13.604478    6232 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:13.604491    6232 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:13.604532    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:13.604564    6232 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:13.604571    6232 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:13.604933    6232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:13.761170    6232 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:13.871725    6232 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:13.871732    6232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:13.871920    6232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:13.881848    6232 main.go:141] libmachine: STDOUT: 
	I1028 04:52:13.881867    6232 main.go:141] libmachine: STDERR: 
	I1028 04:52:13.881923    6232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2 +20000M
	I1028 04:52:13.890525    6232 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:13.890551    6232 main.go:141] libmachine: STDERR: 
	I1028 04:52:13.890565    6232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:13.890570    6232 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:13.890582    6232 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:13.890617    6232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:f2:01:76:11:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:13.892454    6232 main.go:141] libmachine: STDOUT: 
	I1028 04:52:13.892468    6232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:13.892486    6232 client.go:171] duration metric: took 288.114917ms to LocalClient.Create
	I1028 04:52:15.894623    6232 start.go:128] duration metric: took 2.309765292s to createHost
	I1028 04:52:15.894676    6232 start.go:83] releasing machines lock for "old-k8s-version-498000", held for 2.309868625s
	W1028 04:52:15.894711    6232 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:15.903755    6232 out.go:177] * Deleting "old-k8s-version-498000" in qemu2 ...
	W1028 04:52:15.934182    6232 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:15.934199    6232 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:20.936541    6232 start.go:360] acquireMachinesLock for old-k8s-version-498000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:20.937240    6232 start.go:364] duration metric: took 568.166µs to acquireMachinesLock for "old-k8s-version-498000"
	I1028 04:52:20.937417    6232 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:20.937689    6232 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:20.949392    6232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:20.998579    6232 start.go:159] libmachine.API.Create for "old-k8s-version-498000" (driver="qemu2")
	I1028 04:52:20.998630    6232 client.go:168] LocalClient.Create starting
	I1028 04:52:20.998799    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:20.998908    6232 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:20.998926    6232 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:20.999001    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:20.999060    6232 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:20.999078    6232 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:20.999857    6232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:21.164501    6232 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:21.220136    6232 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:21.220149    6232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:21.220368    6232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:21.231202    6232 main.go:141] libmachine: STDOUT: 
	I1028 04:52:21.231236    6232 main.go:141] libmachine: STDERR: 
	I1028 04:52:21.231300    6232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2 +20000M
	I1028 04:52:21.240236    6232 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:21.240255    6232 main.go:141] libmachine: STDERR: 
	I1028 04:52:21.240269    6232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:21.240275    6232 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:21.240293    6232 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:21.240323    6232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:31:56:99:14:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:21.242249    6232 main.go:141] libmachine: STDOUT: 
	I1028 04:52:21.242270    6232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:21.242285    6232 client.go:171] duration metric: took 243.647833ms to LocalClient.Create
	I1028 04:52:23.244508    6232 start.go:128] duration metric: took 2.306751334s to createHost
	I1028 04:52:23.244612    6232 start.go:83] releasing machines lock for "old-k8s-version-498000", held for 2.307337292s
	W1028 04:52:23.245153    6232 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:23.255734    6232 out.go:201] 
	W1028 04:52:23.260702    6232 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:23.260750    6232 out.go:270] * 
	* 
	W1028 04:52:23.263136    6232 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:23.271677    6232 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (69.086959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)
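The trace above shows minikube's two-attempt provisioning flow: createHost fails, the machine is deleted, and after the advertised 5-second backoff the identical create path fails again, which is what GUEST_PROVISION / exit status 80 summarizes. The refusal can also be reproduced without minikube: BSD netcat on macOS dials unix sockets directly, so a quick probe (not part of the test suite) would be:

	# Succeeds only if something is accepting connections on the socket
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused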

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-498000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-498000 create -f testdata/busybox.yaml: exit status 1 (29.701041ms)

** stderr ** 
	error: context "old-k8s-version-498000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-498000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.261791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (33.585041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
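DeployApp and the serial subtests that follow are cascading failures rather than independent ones: FirstStart never brought a cluster up, so no "old-k8s-version-498000" context was written to the kubeconfig, and kubectl correctly reports that the context does not exist. A direct check outside the harness, using the KUBECONFIG path from the run's environment above, would be:

	# The failed profile should simply be absent from the context list
	KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig kubectl config get-contexts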

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-498000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-498000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-498000 describe deploy/metrics-server -n kube-system: exit status 1 (27.785791ms)

** stderr ** 
	error: context "old-k8s-version-498000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-498000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.584959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
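Note the asymmetry in this subtest: the "addons enable metrics-server" invocation itself succeeded, since it only rewrites the saved profile, while the follow-up kubectl describe failed for the same missing-context reason as above. The SecondStart trace below confirms the persistence: the loaded cluster config carries CustomAddonRegistries:map[MetricsServer:fake.domain]. Assuming the profile's config.json serializes that map verbatim (an assumption; the log only shows the in-memory dump), a direct check would be:

	# Count occurrences of the fake registry in the saved profile
	grep -c 'fake.domain' /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/old-k8s-version-498000/config.json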

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.181637s)

-- stdout --
	* [old-k8s-version-498000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-498000" primary control-plane node in "old-k8s-version-498000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:25.874295    6275 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:25.874454    6275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:25.874457    6275 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:25.874459    6275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:25.874592    6275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:25.875661    6275 out.go:352] Setting JSON to false
	I1028 04:52:25.893535    6275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4916,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:25.893607    6275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:25.898600    6275 out.go:177] * [old-k8s-version-498000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:25.906565    6275 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:25.906618    6275 notify.go:220] Checking for updates...
	I1028 04:52:25.913580    6275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:25.914865    6275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:25.917568    6275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:25.920634    6275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:25.924390    6275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:25.928982    6275 config.go:182] Loaded profile config "old-k8s-version-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 04:52:25.932591    6275 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 04:52:25.933796    6275 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:25.937601    6275 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:52:25.944427    6275 start.go:297] selected driver: qemu2
	I1028 04:52:25.944433    6275 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:25.944473    6275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:25.946796    6275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:25.946819    6275 cni.go:84] Creating CNI manager for ""
	I1028 04:52:25.946838    6275 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 04:52:25.946878    6275 start.go:340] cluster config:
	{Name:old-k8s-version-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:25.951044    6275 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:25.959587    6275 out.go:177] * Starting "old-k8s-version-498000" primary control-plane node in "old-k8s-version-498000" cluster
	I1028 04:52:25.963617    6275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:52:25.963634    6275 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:52:25.963643    6275 cache.go:56] Caching tarball of preloaded images
	I1028 04:52:25.963712    6275 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:52:25.963717    6275 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 04:52:25.963766    6275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/old-k8s-version-498000/config.json ...
	I1028 04:52:25.964061    6275 start.go:360] acquireMachinesLock for old-k8s-version-498000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:25.964094    6275 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "old-k8s-version-498000"
	I1028 04:52:25.964102    6275 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:52:25.964107    6275 fix.go:54] fixHost starting: 
	I1028 04:52:25.964219    6275 fix.go:112] recreateIfNeeded on old-k8s-version-498000: state=Stopped err=<nil>
	W1028 04:52:25.964226    6275 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:52:25.968549    6275 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-498000" ...
	I1028 04:52:25.975667    6275 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:25.975707    6275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:31:56:99:14:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:25.977707    6275 main.go:141] libmachine: STDOUT: 
	I1028 04:52:25.977722    6275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:25.977752    6275 fix.go:56] duration metric: took 13.643959ms for fixHost
	I1028 04:52:25.977758    6275 start.go:83] releasing machines lock for "old-k8s-version-498000", held for 13.659583ms
	W1028 04:52:25.977762    6275 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:25.977808    6275 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:25.977811    6275 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:30.979922    6275 start.go:360] acquireMachinesLock for old-k8s-version-498000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:30.980084    6275 start.go:364] duration metric: took 123.417µs to acquireMachinesLock for "old-k8s-version-498000"
	I1028 04:52:30.980135    6275 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:52:30.980139    6275 fix.go:54] fixHost starting: 
	I1028 04:52:30.980331    6275 fix.go:112] recreateIfNeeded on old-k8s-version-498000: state=Stopped err=<nil>
	W1028 04:52:30.980336    6275 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:52:30.986501    6275 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-498000" ...
	I1028 04:52:30.989517    6275 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:30.989624    6275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:31:56:99:14:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/old-k8s-version-498000/disk.qcow2
	I1028 04:52:30.992428    6275 main.go:141] libmachine: STDOUT: 
	I1028 04:52:30.992446    6275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:30.992464    6275 fix.go:56] duration metric: took 12.325541ms for fixHost
	I1028 04:52:30.992469    6275 start.go:83] releasing machines lock for "old-k8s-version-498000", held for 12.376083ms
	W1028 04:52:30.992522    6275 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:31.000521    6275 out.go:201] 
	W1028 04:52:31.004536    6275 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:31.004541    6275 out.go:270] * 
	* 
	W1028 04:52:31.005030    6275 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:31.015541    6275 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-498000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)
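
The failures in this group all reduce to the single error repeated in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon, so every QEMU launch aborts with Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal preflight sketch of that connectivity check (the socket path is taken from the logged command line; the rest is an assumption, not minikube's actual code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The qemu2 driver hands QEMU a connected fd via socket_vmnet_client,
		// so this dial must succeed before any VM in this report can boot.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// The state every failing test hits: nothing is listening there.
			fmt.Println("socket_vmnet is not listening:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}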

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-498000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.444542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-498000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-498000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-498000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.017292ms)

** stderr ** 
	error: context "old-k8s-version-498000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-498000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (33.716917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-498000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.769375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
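
The "(-want +got)" block above is a go-cmp style diff: every expected v1.20.0 image is reported missing because image list ran against a host that never booted. A reduced sketch of how such a diff is rendered (assuming github.com/google/go-cmp; the abbreviated lists are illustrative, not the test's actual data):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"k8s.gcr.io/pause:3.2"} // expected for v1.20.0, abbreviated
		got := []string{}                        // empty: the host is Stopped, so no images were reported
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}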

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-498000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-498000 --alsologtostderr -v=1: exit status 83 (44.978958ms)

-- stdout --
	* The control-plane node old-k8s-version-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-498000"

-- /stdout --
** stderr ** 
	I1028 04:52:31.267872    6298 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:31.268830    6298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:31.268836    6298 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:31.268839    6298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:31.269022    6298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:31.269279    6298 out.go:352] Setting JSON to false
	I1028 04:52:31.269286    6298 mustload.go:65] Loading cluster: old-k8s-version-498000
	I1028 04:52:31.269523    6298 config.go:182] Loaded profile config "old-k8s-version-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 04:52:31.274028    6298 out.go:177] * The control-plane node old-k8s-version-498000 host is not running: state=Stopped
	I1028 04:52:31.277189    6298 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-498000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-498000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (33.988666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (34.150542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.723911583s)

-- stdout --
	* [no-preload-652000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-652000" primary control-plane node in "no-preload-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:31.609592    6315 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:31.609746    6315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:31.609749    6315 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:31.609752    6315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:31.609911    6315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:31.611124    6315 out.go:352] Setting JSON to false
	I1028 04:52:31.629042    6315 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4922,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:31.629157    6315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:31.632591    6315 out.go:177] * [no-preload-652000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:31.639687    6315 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:31.639750    6315 notify.go:220] Checking for updates...
	I1028 04:52:31.645563    6315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:31.648623    6315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:31.651539    6315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:31.654613    6315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:31.657596    6315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:31.659157    6315 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:31.659227    6315 config.go:182] Loaded profile config "stopped-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 04:52:31.659273    6315 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:31.663590    6315 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:52:31.670439    6315 start.go:297] selected driver: qemu2
	I1028 04:52:31.670446    6315 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:52:31.670451    6315 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:31.672920    6315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:52:31.675547    6315 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:52:31.678669    6315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:31.678689    6315 cni.go:84] Creating CNI manager for ""
	I1028 04:52:31.678709    6315 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:52:31.678713    6315 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:52:31.678743    6315 start.go:340] cluster config:
	{Name:no-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:31.683279    6315 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.691582    6315 out.go:177] * Starting "no-preload-652000" primary control-plane node in "no-preload-652000" cluster
	I1028 04:52:31.695551    6315 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:52:31.695608    6315 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/no-preload-652000/config.json ...
	I1028 04:52:31.695621    6315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/no-preload-652000/config.json: {Name:mk9046572a0b9769c0ef88e9af7e725ad23fe10b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:52:31.695636    6315 cache.go:107] acquiring lock: {Name:mk5701025b57650ece916099107a039639a4ca7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695648    6315 cache.go:107] acquiring lock: {Name:mk8f7fedd57339f55502801ee62a33ecabbf16cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695730    6315 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 04:52:31.695738    6315 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.333µs
	I1028 04:52:31.695745    6315 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 04:52:31.695751    6315 cache.go:107] acquiring lock: {Name:mk9a0ad12bc9e6c8ada7386b6e19fbcdbaf180ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695798    6315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 04:52:31.695792    6315 cache.go:107] acquiring lock: {Name:mk642f809c89df2ec877ebbd9221550378be8114 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695802    6315 cache.go:107] acquiring lock: {Name:mk715c79d0c16109e82f4f6b27022e2e2a336418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695947    6315 start.go:360] acquireMachinesLock for no-preload-652000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:31.695963    6315 cache.go:107] acquiring lock: {Name:mk94f29003034646beaea78b56ac24edc80b262f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.695965    6315 cache.go:107] acquiring lock: {Name:mkbd9d6a46cfe8ff62ea62292b3cd4c2a1aec27d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.696047    6315 start.go:364] duration metric: took 91.833µs to acquireMachinesLock for "no-preload-652000"
	I1028 04:52:31.696057    6315 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 04:52:31.696052    6315 cache.go:107] acquiring lock: {Name:mkb3fc96d244ce69d12d3023f49e43765c739001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:31.696091    6315 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 04:52:31.696103    6315 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 04:52:31.696113    6315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 04:52:31.696060    6315 start.go:93] Provisioning new machine with config: &{Name:no-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:31.696135    6315 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:31.696170    6315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 04:52:31.696264    6315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 04:52:31.699579    6315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:31.705853    6315 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 04:52:31.705899    6315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 04:52:31.705936    6315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 04:52:31.706701    6315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 04:52:31.707310    6315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 04:52:31.707459    6315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 04:52:31.707497    6315 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 04:52:31.715693    6315 start.go:159] libmachine.API.Create for "no-preload-652000" (driver="qemu2")
	I1028 04:52:31.715719    6315 client.go:168] LocalClient.Create starting
	I1028 04:52:31.715810    6315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:31.715847    6315 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:31.715864    6315 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:31.715899    6315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:31.715928    6315 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:31.715936    6315 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:31.716343    6315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:31.880068    6315 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:31.908834    6315 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:31.908851    6315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:31.909106    6315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:31.919997    6315 main.go:141] libmachine: STDOUT: 
	I1028 04:52:31.920015    6315 main.go:141] libmachine: STDERR: 
	I1028 04:52:31.920073    6315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2 +20000M
	I1028 04:52:31.929135    6315 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:31.929152    6315 main.go:141] libmachine: STDERR: 
	I1028 04:52:31.929165    6315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:31.929171    6315 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:31.929184    6315 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:31.929218    6315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:df:ff:df:5a:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:31.931202    6315 main.go:141] libmachine: STDOUT: 
	I1028 04:52:31.931217    6315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:31.931236    6315 client.go:171] duration metric: took 215.508834ms to LocalClient.Create
	I1028 04:52:32.214161    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 04:52:32.215054    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 04:52:32.230386    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1028 04:52:32.275593    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1028 04:52:32.337611    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 04:52:32.383340    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 04:52:32.411261    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1028 04:52:32.411270    6315 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 715.4965ms
	I1028 04:52:32.411277    6315 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1028 04:52:32.471909    6315 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 04:52:33.932459    6315 start.go:128] duration metric: took 2.236282833s to createHost
	I1028 04:52:33.932539    6315 start.go:83] releasing machines lock for "no-preload-652000", held for 2.236474458s
	W1028 04:52:33.932600    6315 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:33.946675    6315 out.go:177] * Deleting "no-preload-652000" in qemu2 ...
	W1028 04:52:33.971642    6315 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:33.971677    6315 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:36.306272    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1028 04:52:36.306290    6315 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.610339625s
	I1028 04:52:36.306298    6315 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1028 04:52:36.490175    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1028 04:52:36.490190    6315 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.794309875s
	I1028 04:52:36.490197    6315 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1028 04:52:36.666077    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1028 04:52:36.666089    6315 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 4.97033s
	I1028 04:52:36.666096    6315 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1028 04:52:36.905355    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1028 04:52:36.905384    6315 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.209326375s
	I1028 04:52:36.905398    6315 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1028 04:52:37.538232    6315 cache.go:157] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1028 04:52:37.538250    6315 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 5.842595083s
	I1028 04:52:37.538260    6315 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1028 04:52:38.972228    6315 start.go:360] acquireMachinesLock for no-preload-652000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:38.972629    6315 start.go:364] duration metric: took 332.917µs to acquireMachinesLock for "no-preload-652000"
	I1028 04:52:38.972723    6315 start.go:93] Provisioning new machine with config: &{Name:no-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:38.972881    6315 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:38.983382    6315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:39.023373    6315 start.go:159] libmachine.API.Create for "no-preload-652000" (driver="qemu2")
	I1028 04:52:39.023439    6315 client.go:168] LocalClient.Create starting
	I1028 04:52:39.023639    6315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:39.023738    6315 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:39.023758    6315 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:39.023839    6315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:39.023890    6315 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:39.023912    6315 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:39.024464    6315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:39.186245    6315 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:39.245788    6315 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:39.245801    6315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:39.246007    6315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:39.256182    6315 main.go:141] libmachine: STDOUT: 
	I1028 04:52:39.256219    6315 main.go:141] libmachine: STDERR: 
	I1028 04:52:39.256284    6315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2 +20000M
	I1028 04:52:39.265210    6315 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:39.265230    6315 main.go:141] libmachine: STDERR: 
	I1028 04:52:39.265245    6315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:39.265251    6315 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:39.265263    6315 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:39.265307    6315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:60:ca:c2:a1:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:39.267352    6315 main.go:141] libmachine: STDOUT: 
	I1028 04:52:39.267383    6315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:39.267396    6315 client.go:171] duration metric: took 243.951792ms to LocalClient.Create
	I1028 04:52:41.267821    6315 start.go:128] duration metric: took 2.294914042s to createHost
	I1028 04:52:41.267862    6315 start.go:83] releasing machines lock for "no-preload-652000", held for 2.295211291s
	W1028 04:52:41.268027    6315 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:41.275385    6315 out.go:201] 
	W1028 04:52:41.281452    6315 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:41.281470    6315 out.go:270] * 
	* 
	W1028 04:52:41.282502    6315 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:41.293275    6315 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (43.452959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.77s)
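
The stderr above shows the driver's bounded retry: the first create fails, the profile is deleted and recreated after "Will try again in 5 seconds", the second attempt fails identically, and the run exits with GUEST_PROVISION. A minimal sketch of that control flow (assumed shape, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for launching QEMU through socket_vmnet_client.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}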

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-652000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-652000 create -f testdata/busybox.yaml: exit status 1 (27.980208ms)

** stderr ** 
	error: context "no-preload-652000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-652000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (34.537667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.955583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
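
Every kubectl step in this group fails with context "no-preload-652000" does not exist for the same downstream reason: the failed FirstStart never wrote the profile's context into the kubeconfig. A small sketch of that check (assuming k8s.io/client-go/tools/clientcmd; the kubeconfig path is the one logged above):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19876-1087/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// The aborted first start means this entry was never created.
		if _, ok := cfg.Contexts["no-preload-652000"]; !ok {
			fmt.Println(`context "no-preload-652000" does not exist`)
		}
	}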

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-652000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-652000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-652000 describe deploy/metrics-server -n kube-system: exit status 1 (27.869334ms)

** stderr ** 
	error: context "no-preload-652000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-652000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.050583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.32s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.240764666s)

-- stdout --
	* [no-preload-652000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-652000" primary control-plane node in "no-preload-652000" cluster
	* Restarting existing qemu2 VM for "no-preload-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:45.147087    6394 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:45.147245    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:45.147248    6394 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:45.147250    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:45.147384    6394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:45.148739    6394 out.go:352] Setting JSON to false
	I1028 04:52:45.170186    6394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4936,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:45.170251    6394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:45.173311    6394 out.go:177] * [no-preload-652000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:45.181365    6394 notify.go:220] Checking for updates...
	I1028 04:52:45.184343    6394 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:45.195281    6394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:45.201294    6394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:45.204219    6394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:45.207264    6394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:45.211671    6394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:45.214627    6394 config.go:182] Loaded profile config "no-preload-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:45.214892    6394 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:45.221134    6394 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:52:45.232325    6394 start.go:297] selected driver: qemu2
	I1028 04:52:45.232332    6394 start.go:901] validating driver "qemu2" against &{Name:no-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:45.232395    6394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:45.234951    6394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:45.234979    6394 cni.go:84] Creating CNI manager for ""
	I1028 04:52:45.234999    6394 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:52:45.235023    6394 start.go:340] cluster config:
	{Name:no-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:45.239514    6394 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.255303    6394 out.go:177] * Starting "no-preload-652000" primary control-plane node in "no-preload-652000" cluster
	I1028 04:52:45.267423    6394 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:52:45.267553    6394 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/no-preload-652000/config.json ...
	I1028 04:52:45.267586    6394 cache.go:107] acquiring lock: {Name:mk8f7fedd57339f55502801ee62a33ecabbf16cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267592    6394 cache.go:107] acquiring lock: {Name:mkbd9d6a46cfe8ff62ea62292b3cd4c2a1aec27d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267650    6394 cache.go:107] acquiring lock: {Name:mk715c79d0c16109e82f4f6b27022e2e2a336418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267659    6394 cache.go:107] acquiring lock: {Name:mk642f809c89df2ec877ebbd9221550378be8114 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267674    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 04:52:45.267684    6394 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.5µs
	I1028 04:52:45.267682    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1028 04:52:45.267690    6394 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 04:52:45.267693    6394 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 111.75µs
	I1028 04:52:45.267698    6394 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1028 04:52:45.267697    6394 cache.go:107] acquiring lock: {Name:mk94f29003034646beaea78b56ac24edc80b262f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267763    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1028 04:52:45.267767    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1028 04:52:45.267772    6394 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 133µs
	I1028 04:52:45.267776    6394 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1028 04:52:45.267767    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1028 04:52:45.267781    6394 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 72µs
	I1028 04:52:45.267813    6394 cache.go:107] acquiring lock: {Name:mk5701025b57650ece916099107a039639a4ca7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267823    6394 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1028 04:52:45.267788    6394 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 144.209µs
	I1028 04:52:45.267828    6394 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1028 04:52:45.267838    6394 cache.go:107] acquiring lock: {Name:mkb3fc96d244ce69d12d3023f49e43765c739001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267797    6394 cache.go:107] acquiring lock: {Name:mk9a0ad12bc9e6c8ada7386b6e19fbcdbaf180ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.267890    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1028 04:52:45.267901    6394 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 118.625µs
	I1028 04:52:45.267905    6394 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1028 04:52:45.267911    6394 cache.go:115] /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1028 04:52:45.267914    6394 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 123.833µs
	I1028 04:52:45.267918    6394 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1028 04:52:45.267941    6394 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 04:52:45.275398    6394 start.go:360] acquireMachinesLock for no-preload-652000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:45.275453    6394 start.go:364] duration metric: took 47.792µs to acquireMachinesLock for "no-preload-652000"
	I1028 04:52:45.275462    6394 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:52:45.275465    6394 fix.go:54] fixHost starting: 
	I1028 04:52:45.275594    6394 fix.go:112] recreateIfNeeded on no-preload-652000: state=Stopped err=<nil>
	W1028 04:52:45.275602    6394 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:52:45.278791    6394 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 04:52:45.285224    6394 out.go:177] * Restarting existing qemu2 VM for "no-preload-652000" ...
	I1028 04:52:45.289310    6394 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:45.289375    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:60:ca:c2:a1:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:45.291482    6394 main.go:141] libmachine: STDOUT: 
	I1028 04:52:45.291504    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:45.291538    6394 fix.go:56] duration metric: took 16.069375ms for fixHost
	I1028 04:52:45.291544    6394 start.go:83] releasing machines lock for "no-preload-652000", held for 16.086042ms
	W1028 04:52:45.291552    6394 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:45.291591    6394 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:45.291595    6394 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:45.711304    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1028 04:52:50.291789    6394 start.go:360] acquireMachinesLock for no-preload-652000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:50.292243    6394 start.go:364] duration metric: took 388.75µs to acquireMachinesLock for "no-preload-652000"
	I1028 04:52:50.292355    6394 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:52:50.292378    6394 fix.go:54] fixHost starting: 
	I1028 04:52:50.293137    6394 fix.go:112] recreateIfNeeded on no-preload-652000: state=Stopped err=<nil>
	W1028 04:52:50.293166    6394 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:52:50.299590    6394 out.go:177] * Restarting existing qemu2 VM for "no-preload-652000" ...
	I1028 04:52:50.306312    6394 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:50.306558    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:60:ca:c2:a1:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/no-preload-652000/disk.qcow2
	I1028 04:52:50.318539    6394 main.go:141] libmachine: STDOUT: 
	I1028 04:52:50.318603    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:50.318683    6394 fix.go:56] duration metric: took 26.30875ms for fixHost
	I1028 04:52:50.318707    6394 start.go:83] releasing machines lock for "no-preload-652000", held for 26.43975ms
	W1028 04:52:50.318917    6394 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:50.326587    6394 out.go:201] 
	W1028 04:52:50.329667    6394 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:50.329691    6394 out.go:270] * 
	* 
	W1028 04:52:50.332641    6394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:50.339618    6394 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (72.812209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.32s)
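Every no-preload failure above shares one root cause, visible in the stderr: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot attach VM networking. A minimal diagnostic sketch for the test host (the daemon binary path is inferred from the SocketVMnetClientPath in the cluster config above, and the --vmnet-gateway value is an assumed example from the lima-vm/socket_vmnet README, not taken from this run):

	# Does the daemon's unix socket exist? (path copied from the error above)
	ls -l /var/run/socket_vmnet

	# If absent, start the daemon by hand (root is required for vmnet.framework;
	# binary path and gateway address are assumptions, not from this report)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet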

TestStartStop/group/embed-certs/serial/FirstStart (10.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.78097375s)

-- stdout --
	* [embed-certs-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-420000" primary control-plane node in "embed-certs-420000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-420000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:52:45.159598    6395 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:45.159756    6395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:45.159760    6395 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:45.159762    6395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:45.159897    6395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:45.161301    6395 out.go:352] Setting JSON to false
	I1028 04:52:45.179343    6395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4936,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:45.179408    6395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:45.184359    6395 out.go:177] * [embed-certs-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:45.191349    6395 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:45.191389    6395 notify.go:220] Checking for updates...
	I1028 04:52:45.198322    6395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:45.201297    6395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:45.204223    6395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:45.210341    6395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:45.214270    6395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:45.217600    6395 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:45.217671    6395 config.go:182] Loaded profile config "no-preload-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:45.217727    6395 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:45.232325    6395 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:52:45.236313    6395 start.go:297] selected driver: qemu2
	I1028 04:52:45.236319    6395 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:52:45.236324    6395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:45.238614    6395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:52:45.243286    6395 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:52:45.255404    6395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:45.255433    6395 cni.go:84] Creating CNI manager for ""
	I1028 04:52:45.255456    6395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:52:45.255469    6395 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:52:45.255504    6395 start.go:340] cluster config:
	{Name:embed-certs-420000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:45.259904    6395 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:45.275281    6395 out.go:177] * Starting "embed-certs-420000" primary control-plane node in "embed-certs-420000" cluster
	I1028 04:52:45.278311    6395 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:52:45.278327    6395 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:52:45.278337    6395 cache.go:56] Caching tarball of preloaded images
	I1028 04:52:45.278416    6395 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:52:45.278422    6395 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:52:45.278487    6395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/embed-certs-420000/config.json ...
	I1028 04:52:45.278499    6395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/embed-certs-420000/config.json: {Name:mk4e70fb91d5c1023ae0ba6c83d22be34da96906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:52:45.282691    6395 start.go:360] acquireMachinesLock for embed-certs-420000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:45.291569    6395 start.go:364] duration metric: took 8.866791ms to acquireMachinesLock for "embed-certs-420000"
	I1028 04:52:45.291604    6395 start.go:93] Provisioning new machine with config: &{Name:embed-certs-420000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:45.291645    6395 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:45.301515    6395 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:45.318677    6395 start.go:159] libmachine.API.Create for "embed-certs-420000" (driver="qemu2")
	I1028 04:52:45.318725    6395 client.go:168] LocalClient.Create starting
	I1028 04:52:45.318853    6395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:45.318899    6395 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:45.318910    6395 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:45.318961    6395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:45.318993    6395 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:45.319002    6395 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:45.319499    6395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:45.477313    6395 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:45.540034    6395 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:45.540040    6395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:45.540221    6395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:45.549979    6395 main.go:141] libmachine: STDOUT: 
	I1028 04:52:45.549998    6395 main.go:141] libmachine: STDERR: 
	I1028 04:52:45.550067    6395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2 +20000M
	I1028 04:52:45.558542    6395 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:45.558557    6395 main.go:141] libmachine: STDERR: 
	I1028 04:52:45.558574    6395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:45.558578    6395 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:45.558590    6395 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:45.558619    6395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:92:7d:ef:9a:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:45.560390    6395 main.go:141] libmachine: STDOUT: 
	I1028 04:52:45.560404    6395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:45.560423    6395 client.go:171] duration metric: took 241.690792ms to LocalClient.Create
	I1028 04:52:47.562621    6395 start.go:128] duration metric: took 2.270943416s to createHost
	I1028 04:52:47.562684    6395 start.go:83] releasing machines lock for "embed-certs-420000", held for 2.271091792s
	W1028 04:52:47.562803    6395 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:47.577980    6395 out.go:177] * Deleting "embed-certs-420000" in qemu2 ...
	W1028 04:52:47.609776    6395 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:47.609802    6395 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:52.610017    6395 start.go:360] acquireMachinesLock for embed-certs-420000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:53.520876    6395 start.go:364] duration metric: took 910.7425ms to acquireMachinesLock for "embed-certs-420000"
	I1028 04:52:53.521168    6395 start.go:93] Provisioning new machine with config: &{Name:embed-certs-420000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:53.521393    6395 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:53.534956    6395 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:53.582660    6395 start.go:159] libmachine.API.Create for "embed-certs-420000" (driver="qemu2")
	I1028 04:52:53.582706    6395 client.go:168] LocalClient.Create starting
	I1028 04:52:53.582881    6395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:53.582954    6395 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:53.582976    6395 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:53.583034    6395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:53.583094    6395 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:53.583120    6395 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:53.583759    6395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:53.751731    6395 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:53.830260    6395 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:53.830266    6395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:53.830452    6395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:53.840376    6395 main.go:141] libmachine: STDOUT: 
	I1028 04:52:53.840402    6395 main.go:141] libmachine: STDERR: 
	I1028 04:52:53.840457    6395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2 +20000M
	I1028 04:52:53.848865    6395 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:53.848880    6395 main.go:141] libmachine: STDERR: 
	I1028 04:52:53.848897    6395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:53.848902    6395 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:53.848908    6395 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:53.848934    6395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:07:b3:9a:60:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:52:53.850715    6395 main.go:141] libmachine: STDOUT: 
	I1028 04:52:53.850731    6395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:53.850751    6395 client.go:171] duration metric: took 268.039083ms to LocalClient.Create
	I1028 04:52:55.852932    6395 start.go:128] duration metric: took 2.331476042s to createHost
	I1028 04:52:55.852979    6395 start.go:83] releasing machines lock for "embed-certs-420000", held for 2.332016166s
	W1028 04:52:55.853315    6395 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:55.868995    6395 out.go:201] 
	W1028 04:52:55.877919    6395 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:52:55.877959    6395 out.go:270] * 
	* 
	W1028 04:52:55.880521    6395 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:52:55.891859    6395 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (74.631416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.86s)
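The log's own recovery advice, spelled out as commands (profile name and flags copied from the failed invocation above). Note that recreating the profile only helps once the socket_vmnet daemon is reachable again; with it still down, the recreate fails identically, as the Deleting/Creating retry above already shows:

	out/minikube-darwin-arm64 delete -p embed-certs-420000
	out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.31.2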

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-652000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (35.032458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-652000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-652000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-652000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.987ms)

** stderr ** 
	error: context "no-preload-652000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-652000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.502833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
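The repeated context "no-preload-652000" does not exist errors follow directly from the failed start: minikube never wrote a kubeconfig entry for the profile. A quick way to confirm which contexts actually exist on the host (plain kubectl; the grep filter is illustrative):

	kubectl config get-contexts
	kubectl config get-contexts -o name | grep no-preload-652000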

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-652000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.506625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
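The -want +got diff above shows the entire expected image set missing, consistent with a VM that never booted rather than a partial pull. As a cross-check, the control-plane entries in the expected list match what kubeadm reports for this Kubernetes version (storage-provisioner is minikube-specific; this assumes a matching kubeadm binary on PATH, which is not part of this test run):

	kubeadm config images list --kubernetes-version v1.31.2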

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-652000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-652000 --alsologtostderr -v=1: exit status 83 (43.934291ms)

-- stdout --
	* The control-plane node no-preload-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-652000"

-- /stdout --
** stderr ** 
	I1028 04:52:50.634273    6427 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:50.634453    6427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:50.634456    6427 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:50.634459    6427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:50.634597    6427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:50.634826    6427 out.go:352] Setting JSON to false
	I1028 04:52:50.634834    6427 mustload.go:65] Loading cluster: no-preload-652000
	I1028 04:52:50.635054    6427 config.go:182] Loaded profile config "no-preload-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:50.639898    6427 out.go:177] * The control-plane node no-preload-652000 host is not running: state=Stopped
	I1028 04:52:50.642802    6427 out.go:177]   To start a cluster, run: "minikube start -p no-preload-652000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-652000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.228042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (33.523708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
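Exit status 83 here is minikube declining to pause because the host is stopped (per the stdout above), not a crash in pause itself. A sketch that gates the pause on host state, using only commands that appear verbatim in this report (status exits non-zero for a stopped host, as the post-mortem runs show):

	out/minikube-darwin-arm64 status -p no-preload-652000 && \
		out/minikube-darwin-arm64 pause -p no-preload-652000 --alsologtostderr -v=1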

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.925930916s)

-- stdout --
	* [default-k8s-diff-port-892000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-892000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1028 04:52:51.088827    6451 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:52:51.088990    6451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:51.088995    6451 out.go:358] Setting ErrFile to fd 2...
	I1028 04:52:51.088997    6451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:52:51.089118    6451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:52:51.090293    6451 out.go:352] Setting JSON to false
	I1028 04:52:51.107868    6451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4942,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:52:51.107939    6451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:52:51.112938    6451 out.go:177] * [default-k8s-diff-port-892000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:52:51.119897    6451 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:52:51.119962    6451 notify.go:220] Checking for updates...
	I1028 04:52:51.127830    6451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:52:51.130842    6451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:52:51.132275    6451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:52:51.135840    6451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:52:51.138860    6451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:52:51.142243    6451 config.go:182] Loaded profile config "embed-certs-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:51.142307    6451 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:52:51.142356    6451 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:52:51.146778    6451 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:52:51.153891    6451 start.go:297] selected driver: qemu2
	I1028 04:52:51.153899    6451 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:52:51.153908    6451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:52:51.156499    6451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:52:51.160841    6451 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:52:51.163975    6451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:52:51.164004    6451 cni.go:84] Creating CNI manager for ""
	I1028 04:52:51.164028    6451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:52:51.164034    6451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:52:51.164076    6451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:52:51.168827    6451 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:52:51.175859    6451 out.go:177] * Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	I1028 04:52:51.179867    6451 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:52:51.179885    6451 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:52:51.179901    6451 cache.go:56] Caching tarball of preloaded images
	I1028 04:52:51.179986    6451 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:52:51.179992    6451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:52:51.180058    6451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/default-k8s-diff-port-892000/config.json ...
	I1028 04:52:51.180070    6451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/default-k8s-diff-port-892000/config.json: {Name:mk77ae952fe815f25c306f10ebd29bd60321cc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:52:51.180335    6451 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:51.180388    6451 start.go:364] duration metric: took 44.916µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I1028 04:52:51.180402    6451 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:51.180436    6451 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:51.188843    6451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:51.207426    6451 start.go:159] libmachine.API.Create for "default-k8s-diff-port-892000" (driver="qemu2")
	I1028 04:52:51.207457    6451 client.go:168] LocalClient.Create starting
	I1028 04:52:51.207533    6451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:51.207571    6451 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:51.207585    6451 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:51.207624    6451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:51.207654    6451 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:51.207662    6451 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:51.208087    6451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:51.366840    6451 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:51.497573    6451 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:51.497579    6451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:51.497792    6451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:51.508004    6451 main.go:141] libmachine: STDOUT: 
	I1028 04:52:51.508019    6451 main.go:141] libmachine: STDERR: 
	I1028 04:52:51.508071    6451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2 +20000M
	I1028 04:52:51.516486    6451 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:51.516506    6451 main.go:141] libmachine: STDERR: 
	I1028 04:52:51.516525    6451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:51.516531    6451 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:51.516542    6451 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:51.516570    6451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:90:77:67:a9:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:51.518318    6451 main.go:141] libmachine: STDOUT: 
	I1028 04:52:51.518331    6451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:51.518348    6451 client.go:171] duration metric: took 310.8845ms to LocalClient.Create
	I1028 04:52:53.520537    6451 start.go:128] duration metric: took 2.34007725s to createHost
	I1028 04:52:53.520736    6451 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 2.340200542s
	W1028 04:52:53.520791    6451 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:53.544985    6451 out.go:177] * Deleting "default-k8s-diff-port-892000" in qemu2 ...
	W1028 04:52:53.567522    6451 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:52:53.567540    6451 start.go:729] Will try again in 5 seconds ...
	I1028 04:52:58.569778    6451 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:52:58.570361    6451 start.go:364] duration metric: took 478.709µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I1028 04:52:58.570459    6451 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:52:58.570717    6451 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:52:58.580367    6451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:52:58.629372    6451 start.go:159] libmachine.API.Create for "default-k8s-diff-port-892000" (driver="qemu2")
	I1028 04:52:58.629441    6451 client.go:168] LocalClient.Create starting
	I1028 04:52:58.629604    6451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:52:58.629684    6451 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:58.629708    6451 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:58.629792    6451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:52:58.629831    6451 main.go:141] libmachine: Decoding PEM data...
	I1028 04:52:58.629844    6451 main.go:141] libmachine: Parsing certificate...
	I1028 04:52:58.630445    6451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:52:58.800019    6451 main.go:141] libmachine: Creating SSH key...
	I1028 04:52:58.912398    6451 main.go:141] libmachine: Creating Disk image...
	I1028 04:52:58.912404    6451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:52:58.912581    6451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:58.922614    6451 main.go:141] libmachine: STDOUT: 
	I1028 04:52:58.922632    6451 main.go:141] libmachine: STDERR: 
	I1028 04:52:58.922687    6451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2 +20000M
	I1028 04:52:58.931166    6451 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:52:58.931184    6451 main.go:141] libmachine: STDERR: 
	I1028 04:52:58.931197    6451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:58.931202    6451 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:52:58.931213    6451 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:52:58.931244    6451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:b2:4e:5b:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:52:58.933002    6451 main.go:141] libmachine: STDOUT: 
	I1028 04:52:58.933016    6451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:52:58.933029    6451 client.go:171] duration metric: took 303.581458ms to LocalClient.Create
	I1028 04:53:00.935236    6451 start.go:128] duration metric: took 2.364466833s to createHost
	I1028 04:53:00.935325    6451 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 2.364911583s
	W1028 04:53:00.935637    6451 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:00.947168    6451 out.go:201] 
	W1028 04:53:00.955343    6451 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:00.955377    6451 out.go:270] * 
	* 
	W1028 04:53:00.958028    6451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:53:00.966247    6451 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (70.372792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.00s)
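Note that every qemu2 start in this group fails the same way: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. A plausible triage sketch on the CI host, using the paths from the log above (the daemon flags follow the upstream socket_vmnet README; the gateway address is illustrative, not taken from this run):

    ls -l /var/run/socket_vmnet    # does the unix socket exist?
    pgrep -fl socket_vmnet         # is the daemon process alive?
    # relaunch the daemon by hand; vmnet requires root, gateway IP is an assumption
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet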

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-420000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-420000 create -f testdata/busybox.yaml: exit status 1 (29.595084ms)
** stderr ** 
	error: context "embed-certs-420000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-420000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.795667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.339292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
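DeployApp never reaches a cluster: FirstStart failed, so no kubeconfig context named embed-certs-420000 was ever written and `kubectl create` fails immediately. One way to confirm that from the same shell (standard kubectl commands, shown here as a sketch):

    kubectl config get-contexts                       # embed-certs-420000 should be missing
    kubectl --context embed-certs-420000 get nodes    # fails with the same "does not exist" error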

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-420000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-420000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-420000 describe deploy/metrics-server -n kube-system: exit status 1 (27.219416ms)
** stderr ** 
	error: context "embed-certs-420000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-420000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.63475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
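For reference, the assertion this test makes: after `addons enable metrics-server` with `--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain`, the deployment's container image should read fake.domain/registry.k8s.io/echoserver:1.4. On a healthy cluster that could be checked directly with something like the following (the jsonpath expression is illustrative, not part of the test):

    kubectl --context embed-certs-420000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected per the test: fake.domain/registry.k8s.io/echoserver:1.4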

TestStartStop/group/embed-certs/serial/SecondStart (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (6.049176667s)
-- stdout --
	* [embed-certs-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-420000" primary control-plane node in "embed-certs-420000" cluster
	* Restarting existing qemu2 VM for "embed-certs-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1028 04:53:00.014336    6503 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:00.014476    6503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:00.014480    6503 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:00.014482    6503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:00.014618    6503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:00.015705    6503 out.go:352] Setting JSON to false
	I1028 04:53:00.033296    6503 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4951,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:53:00.033375    6503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:53:00.037121    6503 out.go:177] * [embed-certs-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:53:00.044105    6503 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:53:00.044178    6503 notify.go:220] Checking for updates...
	I1028 04:53:00.052080    6503 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:53:00.055064    6503 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:53:00.058086    6503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:53:00.061102    6503 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:53:00.064089    6503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:53:00.067449    6503 config.go:182] Loaded profile config "embed-certs-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:00.067729    6503 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:53:00.072092    6503 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:53:00.079127    6503 start.go:297] selected driver: qemu2
	I1028 04:53:00.079133    6503 start.go:901] validating driver "qemu2" against &{Name:embed-certs-420000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:00.079184    6503 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:53:00.081786    6503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:53:00.081864    6503 cni.go:84] Creating CNI manager for ""
	I1028 04:53:00.081885    6503 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:53:00.081912    6503 start.go:340] cluster config:
	{Name:embed-certs-420000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:00.086505    6503 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:53:00.095083    6503 out.go:177] * Starting "embed-certs-420000" primary control-plane node in "embed-certs-420000" cluster
	I1028 04:53:00.099115    6503 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:53:00.099138    6503 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:53:00.099149    6503 cache.go:56] Caching tarball of preloaded images
	I1028 04:53:00.099247    6503 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:53:00.099258    6503 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:53:00.099313    6503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/embed-certs-420000/config.json ...
	I1028 04:53:00.099791    6503 start.go:360] acquireMachinesLock for embed-certs-420000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:00.935447    6503 start.go:364] duration metric: took 835.623958ms to acquireMachinesLock for "embed-certs-420000"
	I1028 04:53:00.935598    6503 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:00.935653    6503 fix.go:54] fixHost starting: 
	I1028 04:53:00.936351    6503 fix.go:112] recreateIfNeeded on embed-certs-420000: state=Stopped err=<nil>
	W1028 04:53:00.936396    6503 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:00.951174    6503 out.go:177] * Restarting existing qemu2 VM for "embed-certs-420000" ...
	I1028 04:53:00.959291    6503 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:00.959536    6503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:07:b3:9a:60:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:53:00.970698    6503 main.go:141] libmachine: STDOUT: 
	I1028 04:53:00.970767    6503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:00.970900    6503 fix.go:56] duration metric: took 35.273292ms for fixHost
	I1028 04:53:00.970925    6503 start.go:83] releasing machines lock for "embed-certs-420000", held for 35.450083ms
	W1028 04:53:00.970955    6503 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:00.971135    6503 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:00.971150    6503 start.go:729] Will try again in 5 seconds ...
	I1028 04:53:05.973432    6503 start.go:360] acquireMachinesLock for embed-certs-420000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:05.974012    6503 start.go:364] duration metric: took 458.333µs to acquireMachinesLock for "embed-certs-420000"
	I1028 04:53:05.974134    6503 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:05.974155    6503 fix.go:54] fixHost starting: 
	I1028 04:53:05.974922    6503 fix.go:112] recreateIfNeeded on embed-certs-420000: state=Stopped err=<nil>
	W1028 04:53:05.974950    6503 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:05.980530    6503 out.go:177] * Restarting existing qemu2 VM for "embed-certs-420000" ...
	I1028 04:53:05.984520    6503 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:05.984719    6503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:07:b3:9a:60:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/embed-certs-420000/disk.qcow2
	I1028 04:53:05.995244    6503 main.go:141] libmachine: STDOUT: 
	I1028 04:53:05.995321    6503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:05.995409    6503 fix.go:56] duration metric: took 21.257916ms for fixHost
	I1028 04:53:05.995428    6503 start.go:83] releasing machines lock for "embed-certs-420000", held for 21.391791ms
	W1028 04:53:05.995602    6503 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:06.003436    6503 out.go:201] 
	W1028 04:53:06.007537    6503 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:06.007565    6503 out.go:270] * 
	* 
	W1028 04:53:06.009924    6503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:53:06.018451    6503 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (70.06975ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.12s)
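SecondStart dies on the same socket_vmnet refusal, and the advice minikube prints is the right recovery once the daemon is reachable again: delete the half-created profile, then rerun the exact test command. Roughly:

    out/minikube-darwin-arm64 delete -p embed-certs-420000
    out/minikube-darwin-arm64 start -p embed-certs-420000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.31.2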

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml: exit status 1 (29.107042ms)
** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (33.844ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (33.072583ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-892000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system: exit status 1 (27.256875ms)
** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (33.633834ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.207609833s)
-- stdout --
	* [default-k8s-diff-port-892000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:53:05.029323    6544 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:05.029471    6544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:05.029474    6544 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:05.029476    6544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:05.029601    6544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:05.030675    6544 out.go:352] Setting JSON to false
	I1028 04:53:05.048517    6544 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4956,"bootTime":1730111429,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:53:05.048580    6544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:53:05.052406    6544 out.go:177] * [default-k8s-diff-port-892000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:53:05.061260    6544 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:53:05.061318    6544 notify.go:220] Checking for updates...
	I1028 04:53:05.068249    6544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:53:05.071282    6544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:53:05.074258    6544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:53:05.077338    6544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:53:05.080278    6544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:53:05.081973    6544 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:05.082246    6544 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:53:05.086244    6544 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:53:05.093156    6544 start.go:297] selected driver: qemu2
	I1028 04:53:05.093164    6544 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:05.093215    6544 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:53:05.095824    6544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:53:05.095859    6544 cni.go:84] Creating CNI manager for ""
	I1028 04:53:05.095889    6544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:53:05.095914    6544 start.go:340] cluster config:
	{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:05.100671    6544 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:53:05.109297    6544 out.go:177] * Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	I1028 04:53:05.113258    6544 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:53:05.113274    6544 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:53:05.113284    6544 cache.go:56] Caching tarball of preloaded images
	I1028 04:53:05.113347    6544 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:53:05.113354    6544 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:53:05.113421    6544 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/default-k8s-diff-port-892000/config.json ...
	I1028 04:53:05.113901    6544 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:05.113929    6544 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I1028 04:53:05.113938    6544 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:05.113943    6544 fix.go:54] fixHost starting: 
	I1028 04:53:05.114062    6544 fix.go:112] recreateIfNeeded on default-k8s-diff-port-892000: state=Stopped err=<nil>
	W1028 04:53:05.114070    6544 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:05.118153    6544 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	I1028 04:53:05.126218    6544 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:05.126251    6544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:b2:4e:5b:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:53:05.128424    6544 main.go:141] libmachine: STDOUT: 
	I1028 04:53:05.128442    6544 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:05.128473    6544 fix.go:56] duration metric: took 14.52825ms for fixHost
	I1028 04:53:05.128478    6544 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 14.543792ms
	W1028 04:53:05.128496    6544 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:05.128534    6544 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:05.128538    6544 start.go:729] Will try again in 5 seconds ...
	I1028 04:53:10.130788    6544 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:10.131295    6544 start.go:364] duration metric: took 397.167µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I1028 04:53:10.131403    6544 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:10.131428    6544 fix.go:54] fixHost starting: 
	I1028 04:53:10.132214    6544 fix.go:112] recreateIfNeeded on default-k8s-diff-port-892000: state=Stopped err=<nil>
	W1028 04:53:10.132240    6544 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:10.150911    6544 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	I1028 04:53:10.155761    6544 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:10.156076    6544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:b2:4e:5b:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I1028 04:53:10.166442    6544 main.go:141] libmachine: STDOUT: 
	I1028 04:53:10.166492    6544 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:10.166592    6544 fix.go:56] duration metric: took 35.168041ms for fixHost
	I1028 04:53:10.166617    6544 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 35.298125ms
	W1028 04:53:10.166814    6544 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:10.175772    6544 out.go:201] 
	W1028 04:53:10.178812    6544 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:10.178856    6544 out.go:270] * 
	* 
	W1028 04:53:10.181470    6544 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:53:10.190709    6544 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (71.209375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.28s)
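
Every start attempt in this group fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets its network file descriptor and minikube gives up after one retry. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew (service name and socket path can differ for a source install):

	# Does the daemon socket exist at the path minikube expects?
	ls -l /var/run/socket_vmnet
	# Is a launchd service for the daemon loaded?
	sudo launchctl list | grep socket_vmnet
	# Restart the daemon (Homebrew install), then re-run the failing test.
	sudo brew services restart socket_vmnet

If the socket is absent and no service is loaded, the daemon is simply not running on this agent, which would explain the identical failure across the qemu2 tests in this report.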

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-420000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (35.0905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-420000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-420000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-420000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.371084ms)

** stderr ** 
	error: context "embed-certs-420000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-420000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.483958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-420000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.017542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
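
The -want +got diff above lists every expected v1.31.2 image on the "-" (want) side and nothing on the "+" (got) side: the host never ran, so the image list the test fetched came back empty and the whole expected set is reported missing. The check can be reproduced with the same command the test ran; on a stopped profile it is expected to return no images:

	out/minikube-darwin-arm64 -p embed-certs-420000 image list --format=json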

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-420000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-420000 --alsologtostderr -v=1: exit status 83 (45.505208ms)

-- stdout --
	* The control-plane node embed-certs-420000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-420000"

-- /stdout --
** stderr ** 
	I1028 04:53:06.309443    6563 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:06.309627    6563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:06.309631    6563 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:06.309633    6563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:06.309742    6563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:06.309984    6563 out.go:352] Setting JSON to false
	I1028 04:53:06.309993    6563 mustload.go:65] Loading cluster: embed-certs-420000
	I1028 04:53:06.310225    6563 config.go:182] Loaded profile config "embed-certs-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:06.314675    6563 out.go:177] * The control-plane node embed-certs-420000 host is not running: state=Stopped
	I1028 04:53:06.318751    6563 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-420000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-420000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.047208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (33.611292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.99739675s)

-- stdout --
	* [newest-cni-800000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-800000" primary control-plane node in "newest-cni-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:53:06.651168    6580 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:06.651335    6580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:06.651338    6580 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:06.651340    6580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:06.651461    6580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:06.652637    6580 out.go:352] Setting JSON to false
	I1028 04:53:06.670075    6580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4957,"bootTime":1730111429,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:53:06.670148    6580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:53:06.674743    6580 out.go:177] * [newest-cni-800000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:53:06.681740    6580 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:53:06.681815    6580 notify.go:220] Checking for updates...
	I1028 04:53:06.686949    6580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:53:06.689794    6580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:53:06.692747    6580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:53:06.695757    6580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:53:06.698691    6580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:53:06.702110    6580 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:06.702168    6580 config.go:182] Loaded profile config "multinode-677000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:06.702212    6580 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:53:06.706718    6580 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:53:06.713709    6580 start.go:297] selected driver: qemu2
	I1028 04:53:06.713716    6580 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:53:06.713721    6580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:53:06.716066    6580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1028 04:53:06.716104    6580 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1028 04:53:06.723681    6580 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:53:06.726819    6580 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 04:53:06.726838    6580 cni.go:84] Creating CNI manager for ""
	I1028 04:53:06.726862    6580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:53:06.726866    6580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:53:06.726902    6580 start.go:340] cluster config:
	{Name:newest-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:06.731662    6580 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:53:06.740672    6580 out.go:177] * Starting "newest-cni-800000" primary control-plane node in "newest-cni-800000" cluster
	I1028 04:53:06.744726    6580 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:53:06.744746    6580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:53:06.744752    6580 cache.go:56] Caching tarball of preloaded images
	I1028 04:53:06.744833    6580 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:53:06.744839    6580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:53:06.744907    6580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/newest-cni-800000/config.json ...
	I1028 04:53:06.744918    6580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/newest-cni-800000/config.json: {Name:mk4b3c112f365a24a5b7b925c274a77e38bfe17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:53:06.745176    6580 start.go:360] acquireMachinesLock for newest-cni-800000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:06.745224    6580 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "newest-cni-800000"
	I1028 04:53:06.745237    6580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:53:06.745278    6580 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:53:06.749743    6580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:53:06.765814    6580 start.go:159] libmachine.API.Create for "newest-cni-800000" (driver="qemu2")
	I1028 04:53:06.765842    6580 client.go:168] LocalClient.Create starting
	I1028 04:53:06.765904    6580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:53:06.765940    6580 main.go:141] libmachine: Decoding PEM data...
	I1028 04:53:06.765950    6580 main.go:141] libmachine: Parsing certificate...
	I1028 04:53:06.765987    6580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:53:06.766019    6580 main.go:141] libmachine: Decoding PEM data...
	I1028 04:53:06.766026    6580 main.go:141] libmachine: Parsing certificate...
	I1028 04:53:06.766383    6580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:53:06.924151    6580 main.go:141] libmachine: Creating SSH key...
	I1028 04:53:07.022133    6580 main.go:141] libmachine: Creating Disk image...
	I1028 04:53:07.022138    6580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:53:07.022341    6580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:07.032315    6580 main.go:141] libmachine: STDOUT: 
	I1028 04:53:07.032334    6580 main.go:141] libmachine: STDERR: 
	I1028 04:53:07.032401    6580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2 +20000M
	I1028 04:53:07.040816    6580 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:53:07.040832    6580 main.go:141] libmachine: STDERR: 
	I1028 04:53:07.040846    6580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:07.040851    6580 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:53:07.040864    6580 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:07.040896    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b0:75:17:c7:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:07.042663    6580 main.go:141] libmachine: STDOUT: 
	I1028 04:53:07.042680    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:07.042700    6580 client.go:171] duration metric: took 276.850417ms to LocalClient.Create
	I1028 04:53:09.044914    6580 start.go:128] duration metric: took 2.299609166s to createHost
	I1028 04:53:09.044972    6580 start.go:83] releasing machines lock for "newest-cni-800000", held for 2.299729208s
	W1028 04:53:09.045054    6580 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:09.060194    6580 out.go:177] * Deleting "newest-cni-800000" in qemu2 ...
	W1028 04:53:09.085831    6580 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:09.085890    6580 start.go:729] Will try again in 5 seconds ...
	I1028 04:53:14.088111    6580 start.go:360] acquireMachinesLock for newest-cni-800000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:14.088662    6580 start.go:364] duration metric: took 421.5µs to acquireMachinesLock for "newest-cni-800000"
	I1028 04:53:14.088821    6580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:53:14.089158    6580 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:53:14.093747    6580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:53:14.139789    6580 start.go:159] libmachine.API.Create for "newest-cni-800000" (driver="qemu2")
	I1028 04:53:14.139842    6580 client.go:168] LocalClient.Create starting
	I1028 04:53:14.140022    6580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/ca.pem
	I1028 04:53:14.140101    6580 main.go:141] libmachine: Decoding PEM data...
	I1028 04:53:14.140119    6580 main.go:141] libmachine: Parsing certificate...
	I1028 04:53:14.140182    6580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19876-1087/.minikube/certs/cert.pem
	I1028 04:53:14.140244    6580 main.go:141] libmachine: Decoding PEM data...
	I1028 04:53:14.140258    6580 main.go:141] libmachine: Parsing certificate...
	I1028 04:53:14.140828    6580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:53:14.307834    6580 main.go:141] libmachine: Creating SSH key...
	I1028 04:53:14.546144    6580 main.go:141] libmachine: Creating Disk image...
	I1028 04:53:14.546159    6580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:53:14.546356    6580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:14.556358    6580 main.go:141] libmachine: STDOUT: 
	I1028 04:53:14.556376    6580 main.go:141] libmachine: STDERR: 
	I1028 04:53:14.556428    6580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2 +20000M
	I1028 04:53:14.565071    6580 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:53:14.565097    6580 main.go:141] libmachine: STDERR: 
	I1028 04:53:14.565112    6580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:14.565116    6580 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:53:14.565125    6580 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:14.565164    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:6d:17:0a:e5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:14.566974    6580 main.go:141] libmachine: STDOUT: 
	I1028 04:53:14.566988    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:14.567003    6580 client.go:171] duration metric: took 427.153334ms to LocalClient.Create
	I1028 04:53:16.569191    6580 start.go:128] duration metric: took 2.479993375s to createHost
	I1028 04:53:16.569245    6580 start.go:83] releasing machines lock for "newest-cni-800000", held for 2.480547667s
	W1028 04:53:16.569652    6580 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:16.584303    6580 out.go:201] 
	W1028 04:53:16.587506    6580 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:16.587563    6580 out.go:270] * 
	* 
	W1028 04:53:16.590087    6580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:53:16.604308    6580 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (73.684667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-800000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.07s)
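
This first start narrows the failure to networking: the qemu-img convert and resize steps for disk.qcow2 succeed, and the error only appears once libmachine launches QEMU through the socket_vmnet_client wrapper. The wrapper can be exercised on its own to take QEMU out of the picture; a hedged sketch reusing the client and socket paths from the log (any command can stand in for qemu-system-aarch64, on the assumption that the client connects to the socket before exec'ing the wrapped command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With the daemon down, this should fail with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' before the wrapped command runs.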

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-892000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (35.442125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
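
The failure here is not the dashboard check itself but kubeconfig resolution: `client config: context "default-k8s-diff-port-892000" does not exist` is the standard client-go/clientcmd error when the requested context was never written, which is the case here because the cluster never came up. A sketch of the lookup that produces it, assuming the k8s.io/client-go module and the kubeconfig path from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve a REST config for an explicit context, as the test harness does;
	// a context missing from the kubeconfig yields the error quoted above.
	rules := &clientcmd.ClientConfigLoadingRules{
		ExplicitPath: "/Users/jenkins/minikube-integration/19876-1087/kubeconfig",
	}
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-892000"}
	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		fmt.Println(err) // context "default-k8s-diff-port-892000" does not exist
	}
}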

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-892000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.618375ms)

** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (32.975667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-892000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (33.560333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
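
The `(-want +got)` listing above is go-cmp diff output: every expected image carries a `-` prefix because `image list` returned nothing from a VM that never booted. A minimal reproduction of that diff format, assuming the github.com/google/go-cmp module and abbreviating the want list to two of the images named in the report:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the stopped VM reported no images
	// cmp.Diff renders the same "(-want +got)" listing shown in the report.
	fmt.Print(cmp.Diff(want, got))
}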

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1: exit status 83 (41.920125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-892000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-892000"

-- /stdout --
** stderr ** 
	I1028 04:53:10.481060    6602 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:10.481236    6602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:10.481240    6602 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:10.481242    6602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:10.481372    6602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:10.481574    6602 out.go:352] Setting JSON to false
	I1028 04:53:10.481583    6602 mustload.go:65] Loading cluster: default-k8s-diff-port-892000
	I1028 04:53:10.481816    6602 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:10.485759    6602 out.go:177] * The control-plane node default-k8s-diff-port-892000 host is not running: state=Stopped
	I1028 04:53:10.489680    6602 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-892000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (32.475709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (32.912416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
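
Each pause/status attempt begins with mustload.go/config.go reading the saved profile ("Loaded profile config ... Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2" in the stderr above), so the command knows the cluster's shape even though the VM is down. That profile is plain JSON under .minikube/profiles/<name>/config.json; the sketch below reads back just the fields quoted in this log (the struct models only those fields and is not minikube's actual type):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig models only the config.json fields quoted in this report.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: readprofile <path to profile config.json>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
		cfg.KubernetesConfig.KubernetesVersion)
}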

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.187721084s)

-- stdout --
	* [newest-cni-800000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-800000" primary control-plane node in "newest-cni-800000" cluster
	* Restarting existing qemu2 VM for "newest-cni-800000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-800000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:53:20.378680    6653 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:20.378840    6653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:20.378844    6653 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:20.378846    6653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:20.378965    6653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:20.380285    6653 out.go:352] Setting JSON to false
	I1028 04:53:20.398075    6653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4971,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:53:20.398158    6653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:53:20.401959    6653 out.go:177] * [newest-cni-800000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:53:20.408855    6653 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 04:53:20.408898    6653 notify.go:220] Checking for updates...
	I1028 04:53:20.414832    6653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 04:53:20.417836    6653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:53:20.420904    6653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:53:20.423764    6653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 04:53:20.426812    6653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:53:20.430220    6653 config.go:182] Loaded profile config "newest-cni-800000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:20.430493    6653 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:53:20.433732    6653 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:53:20.440841    6653 start.go:297] selected driver: qemu2
	I1028 04:53:20.440849    6653 start.go:901] validating driver "qemu2" against &{Name:newest-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:20.440915    6653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:53:20.443514    6653 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 04:53:20.443541    6653 cni.go:84] Creating CNI manager for ""
	I1028 04:53:20.443572    6653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:53:20.443594    6653 start.go:340] cluster config:
	{Name:newest-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:53:20.448119    6653 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:53:20.455823    6653 out.go:177] * Starting "newest-cni-800000" primary control-plane node in "newest-cni-800000" cluster
	I1028 04:53:20.458771    6653 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:53:20.458788    6653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:53:20.458800    6653 cache.go:56] Caching tarball of preloaded images
	I1028 04:53:20.458876    6653 preload.go:172] Found /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:53:20.458889    6653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:53:20.458954    6653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/newest-cni-800000/config.json ...
	I1028 04:53:20.459423    6653 start.go:360] acquireMachinesLock for newest-cni-800000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:20.459451    6653 start.go:364] duration metric: took 23µs to acquireMachinesLock for "newest-cni-800000"
	I1028 04:53:20.459460    6653 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:20.459465    6653 fix.go:54] fixHost starting: 
	I1028 04:53:20.459576    6653 fix.go:112] recreateIfNeeded on newest-cni-800000: state=Stopped err=<nil>
	W1028 04:53:20.459583    6653 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:20.463821    6653 out.go:177] * Restarting existing qemu2 VM for "newest-cni-800000" ...
	I1028 04:53:20.471800    6653 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:20.471840    6653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:6d:17:0a:e5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:20.474029    6653 main.go:141] libmachine: STDOUT: 
	I1028 04:53:20.474047    6653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:20.474073    6653 fix.go:56] duration metric: took 14.6085ms for fixHost
	I1028 04:53:20.474078    6653 start.go:83] releasing machines lock for "newest-cni-800000", held for 14.622416ms
	W1028 04:53:20.474082    6653 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:20.474122    6653 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:20.474126    6653 start.go:729] Will try again in 5 seconds ...
	I1028 04:53:25.476379    6653 start.go:360] acquireMachinesLock for newest-cni-800000: {Name:mkf90fce2b269ff4467ddf999528feb94d5e332d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:53:25.476752    6653 start.go:364] duration metric: took 294.167µs to acquireMachinesLock for "newest-cni-800000"
	I1028 04:53:25.476871    6653 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:53:25.476889    6653 fix.go:54] fixHost starting: 
	I1028 04:53:25.477605    6653 fix.go:112] recreateIfNeeded on newest-cni-800000: state=Stopped err=<nil>
	W1028 04:53:25.477633    6653 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:53:25.486941    6653 out.go:177] * Restarting existing qemu2 VM for "newest-cni-800000" ...
	I1028 04:53:25.490165    6653 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:53:25.490399    6653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:6d:17:0a:e5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19876-1087/.minikube/machines/newest-cni-800000/disk.qcow2
	I1028 04:53:25.500041    6653 main.go:141] libmachine: STDOUT: 
	I1028 04:53:25.500096    6653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:53:25.500179    6653 fix.go:56] duration metric: took 23.291458ms for fixHost
	I1028 04:53:25.500202    6653 start.go:83] releasing machines lock for "newest-cni-800000", held for 23.428625ms
	W1028 04:53:25.500394    6653 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-800000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-800000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:53:25.508053    6653 out.go:201] 
	W1028 04:53:25.512103    6653 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:53:25.512134    6653 out.go:270] * 
	* 
	W1028 04:53:25.514767    6653 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:53:25.521038    6653 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-800000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (74.502958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-800000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
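
Every start attempt in this group dies at the same step: socket_vmnet_client cannot reach the daemon behind /var/run/socket_vmnet, so QEMU never receives its network file descriptor and provisioning aborts with GUEST_PROVISION (exit status 80). The failing operation is an ordinary unix-socket connect, which can be reproduced outside minikube with a short probe (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client uses; on this host
	// the connect fails, surfacing above as `Failed to connect to
	// "/var/run/socket_vmnet": Connection refused`.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}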

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-800000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (35.479625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-800000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-800000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-800000 --alsologtostderr -v=1: exit status 83 (45.034458ms)

-- stdout --
	* The control-plane node newest-cni-800000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-800000"

-- /stdout --
** stderr ** 
	I1028 04:53:25.725128    6667 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:53:25.725316    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:25.725319    6667 out.go:358] Setting ErrFile to fd 2...
	I1028 04:53:25.725322    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:53:25.725440    6667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 04:53:25.725653    6667 out.go:352] Setting JSON to false
	I1028 04:53:25.725661    6667 mustload.go:65] Loading cluster: newest-cni-800000
	I1028 04:53:25.725896    6667 config.go:182] Loaded profile config "newest-cni-800000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:53:25.729601    6667 out.go:177] * The control-plane node newest-cni-800000 host is not running: state=Stopped
	I1028 04:53:25.733438    6667 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-800000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-800000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (33.963167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-800000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (34.795459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-800000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (152/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 9.27
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 202.2
29 TestAddons/serial/Volcano 39.95
31 TestAddons/serial/GCPAuth/Namespaces 0.07
32 TestAddons/serial/GCPAuth/FakeCredentials 8.38
35 TestAddons/parallel/Registry 15.24
36 TestAddons/parallel/Ingress 16.7
37 TestAddons/parallel/InspektorGadget 11.27
38 TestAddons/parallel/MetricsServer 5.26
40 TestAddons/parallel/CSI 31.66
41 TestAddons/parallel/Headlamp 16.67
42 TestAddons/parallel/CloudSpanner 6.21
43 TestAddons/parallel/LocalPath 42.98
44 TestAddons/parallel/NvidiaDevicePlugin 6.18
45 TestAddons/parallel/Yakd 10.26
47 TestAddons/StoppedEnableDisable 12.44
55 TestHyperKitDriverInstallOrUpdate 11.03
58 TestErrorSpam/setup 35.22
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.26
61 TestErrorSpam/pause 0.7
62 TestErrorSpam/unpause 0.66
63 TestErrorSpam/stop 55.26
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 48.37
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.04
70 TestFunctional/serial/KubeContext 0.03
71 TestFunctional/serial/KubectlGetPods 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
75 TestFunctional/serial/CacheCmd/cache/add_local 1.15
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.08
81 TestFunctional/serial/MinikubeKubectlCmd 0.8
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.16
83 TestFunctional/serial/ExtraConfig 36.75
85 TestFunctional/serial/LogsCmd 0.66
86 TestFunctional/serial/LogsFileCmd 0.65
87 TestFunctional/serial/InvalidService 4.69
89 TestFunctional/parallel/ConfigCmd 0.25
90 TestFunctional/parallel/DashboardCmd 8.91
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.25
98 TestFunctional/parallel/AddonsCmd 0.11
99 TestFunctional/parallel/PersistentVolumeClaim 25.74
101 TestFunctional/parallel/SSHCmd 0.13
102 TestFunctional/parallel/CpCmd 0.42
104 TestFunctional/parallel/FileSync 0.07
105 TestFunctional/parallel/CertSync 0.4
109 TestFunctional/parallel/NodeLabels 0.04
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.22
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.96
121 TestFunctional/parallel/ImageCommands/Setup 1.66
122 TestFunctional/parallel/DockerEnv/bash 0.33
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
126 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
139 TestFunctional/parallel/ServiceCmd/List 0.12
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
142 TestFunctional/parallel/ServiceCmd/Format 0.09
143 TestFunctional/parallel/ServiceCmd/URL 0.09
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
151 TestFunctional/parallel/ProfileCmd/profile_list 0.14
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
153 TestFunctional/parallel/MountCmd/any-port 5.32
154 TestFunctional/parallel/MountCmd/specific-port 1.16
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/CopyFile 0.04
176 TestImageBuild/serial/Setup 33.46
177 TestImageBuild/serial/NormalBuild 1.34
178 TestImageBuild/serial/BuildWithBuildArg 0.42
179 TestImageBuild/serial/BuildWithDockerIgnore 0.34
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.31
185 TestJSONOutput/start/Audit 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.18
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.04
213 TestMinikubeProfile 71.95
259 TestStoppedBinaryUpgrade/Setup 2.11
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.52
277 TestNoKubernetes/serial/Stop 3.58
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
294 TestStartStop/group/old-k8s-version/serial/Stop 2.14
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.4
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
318 TestStartStop/group/embed-certs/serial/Stop 3.65
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.6
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.46
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 03:40:10.123609    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1028 03:40:10.124051    1598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-381000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-381000: exit status 85 (97.053833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-381000 | jenkins | v1.34.0 | 28 Oct 24 03:39 PDT |          |
	|         | -p download-only-381000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 03:39:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 03:39:51.282165    1599 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:39:51.282324    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:39:51.282328    1599 out.go:358] Setting ErrFile to fd 2...
	I1028 03:39:51.282330    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:39:51.282454    1599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	W1028 03:39:51.282563    1599 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19876-1087/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19876-1087/.minikube/config/config.json: no such file or directory
	I1028 03:39:51.283907    1599 out.go:352] Setting JSON to true
	I1028 03:39:51.302773    1599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":562,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:39:51.302849    1599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:39:51.308122    1599 out.go:97] [download-only-381000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:39:51.308279    1599 notify.go:220] Checking for updates...
	W1028 03:39:51.308335    1599 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 03:39:51.312058    1599 out.go:169] MINIKUBE_LOCATION=19876
	I1028 03:39:51.315164    1599 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:39:51.319074    1599 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:39:51.322120    1599 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:39:51.325159    1599 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	W1028 03:39:51.331074    1599 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 03:39:51.331291    1599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:39:51.335143    1599 out.go:97] Using the qemu2 driver based on user configuration
	I1028 03:39:51.335162    1599 start.go:297] selected driver: qemu2
	I1028 03:39:51.335182    1599 start.go:901] validating driver "qemu2" against <nil>
	I1028 03:39:51.335233    1599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 03:39:51.338995    1599 out.go:169] Automatically selected the socket_vmnet network
	I1028 03:39:51.344993    1599 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 03:39:51.345082    1599 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 03:39:51.345139    1599 cni.go:84] Creating CNI manager for ""
	I1028 03:39:51.345184    1599 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 03:39:51.345250    1599 start.go:340] cluster config:
	{Name:download-only-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-381000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:39:51.349841    1599 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:39:51.354150    1599 out.go:97] Downloading VM boot image ...
	I1028 03:39:51.354167    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1028 03:39:58.479399    1599 out.go:97] Starting "download-only-381000" primary control-plane node in "download-only-381000" cluster
	I1028 03:39:58.479430    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:39:58.538338    1599 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 03:39:58.538359    1599 cache.go:56] Caching tarball of preloaded images
	I1028 03:39:58.538580    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:39:58.542789    1599 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 03:39:58.542795    1599 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:39:58.622388    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 03:40:08.899879    1599 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:40:08.900059    1599 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:40:09.593343    1599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 03:40:09.593603    1599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/download-only-381000/config.json ...
	I1028 03:40:09.593620    1599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/download-only-381000/config.json: {Name:mk2a7c67cc474f3017fb2a3152723a48ce971025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 03:40:09.593906    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 03:40:09.594160    1599 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1028 03:40:10.075842    1599 out.go:193] 
	W1028 03:40:10.079959    1599 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19876-1087/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320 0x105e81320] Decompressors:map[bz2:0x14000125630 gz:0x14000125638 tar:0x140001255e0 tar.bz2:0x140001255f0 tar.gz:0x14000125600 tar.xz:0x14000125610 tar.zst:0x14000125620 tbz2:0x140001255f0 tgz:0x14000125600 txz:0x14000125610 tzst:0x14000125620 xz:0x14000125640 zip:0x14000125650 zst:0x14000125648] Getters:map[file:0x140018505a0 http:0x140006f20f0 https:0x140006f2140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1028 03:40:10.079987    1599 out_reason.go:110] 
	W1028 03:40:10.087842    1599 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 03:40:10.091822    1599 out.go:193] 
	
	
	* The control-plane node download-only-381000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-381000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
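
This test passes because exit status 85 is the expected result of running `logs` against a download-only profile, but the captured output also records the real v1.20.0 failure: the getter dump above is hashicorp/go-getter resolving the `?checksum=file:<url>` query, and the checksum download itself came back 404, since dl.k8s.io serves no darwin/arm64 kubectl at v1.20.0. That 404 can be confirmed with a plain HEAD request (stdlib-only sketch; the result depends on the live endpoint):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// HEAD the checksum URL from the failed download above; the report's
	// "bad response code: 404" corresponds to a 404 status here.
	resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("status:", resp.Status)
}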

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-381000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (9.27s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (9.2668705s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (9.27s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 03:40:19.770597    1598 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 03:40:19.770652    1598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)
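The preload found above can be checked by hand against the digest minikube requests; a minimal sketch, assuming the stock macOS md5 utility (the path is from the log above, and the expected digest is the checksum=md5: parameter of the download URL logged further down):

	# Should print 5f3d7369b12f47138aa2863bb7bda916 for an intact cache
	md5 -q /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4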

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-352000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-352000: exit status 85 (82.439875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-381000 | jenkins | v1.34.0 | 28 Oct 24 03:39 PDT |                     |
	|         | -p download-only-381000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 03:40 PDT | 28 Oct 24 03:40 PDT |
	| delete  | -p download-only-381000        | download-only-381000 | jenkins | v1.34.0 | 28 Oct 24 03:40 PDT | 28 Oct 24 03:40 PDT |
	| start   | -o=json --download-only        | download-only-352000 | jenkins | v1.34.0 | 28 Oct 24 03:40 PDT |                     |
	|         | -p download-only-352000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 03:40:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 03:40:10.534622    1625 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:40:10.534774    1625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:40:10.534777    1625 out.go:358] Setting ErrFile to fd 2...
	I1028 03:40:10.534784    1625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:40:10.534900    1625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:40:10.536058    1625 out.go:352] Setting JSON to true
	I1028 03:40:10.553758    1625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":581,"bootTime":1730111429,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:40:10.553844    1625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:40:10.557625    1625 out.go:97] [download-only-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:40:10.557753    1625 notify.go:220] Checking for updates...
	I1028 03:40:10.561502    1625 out.go:169] MINIKUBE_LOCATION=19876
	I1028 03:40:10.564506    1625 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:40:10.568442    1625 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:40:10.571475    1625 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:40:10.574471    1625 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	W1028 03:40:10.580410    1625 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 03:40:10.580553    1625 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:40:10.583430    1625 out.go:97] Using the qemu2 driver based on user configuration
	I1028 03:40:10.583438    1625 start.go:297] selected driver: qemu2
	I1028 03:40:10.583441    1625 start.go:901] validating driver "qemu2" against <nil>
	I1028 03:40:10.583484    1625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 03:40:10.586467    1625 out.go:169] Automatically selected the socket_vmnet network
	I1028 03:40:10.591726    1625 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 03:40:10.591812    1625 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 03:40:10.591833    1625 cni.go:84] Creating CNI manager for ""
	I1028 03:40:10.591858    1625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 03:40:10.591870    1625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 03:40:10.591906    1625 start.go:340] cluster config:
	{Name:download-only-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:40:10.596165    1625 iso.go:125] acquiring lock: {Name:mkd1d95948a0ce2d090772874f764eb344f56fe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 03:40:10.599465    1625 out.go:97] Starting "download-only-352000" primary control-plane node in "download-only-352000" cluster
	I1028 03:40:10.599480    1625 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:40:10.658808    1625 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 03:40:10.658819    1625 cache.go:56] Caching tarball of preloaded images
	I1028 03:40:10.659060    1625 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 03:40:10.664267    1625 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 03:40:10.664277    1625 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1028 03:40:10.745172    1625 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/19876-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-352000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-352000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestBinaryMirror
I1028 03:40:20.304103    1598 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-462000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-462000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-462000
--- PASS: TestBinaryMirror (0.33s)
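TestBinaryMirror verifies that binaries can be pulled from a user-supplied mirror instead of dl.k8s.io; the harness serves one on an ephemeral port (49313 in this run). A rough manual equivalent, with a hypothetical profile name and assuming a mirror is already listening:

	# Hypothetical re-run of the mirrored download; substitute your mirror's port for 49313
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:49313 --driver=qemu2
	out/minikube-darwin-arm64 delete -p binary-mirror-demo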

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-966000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-966000: exit status 85 (66.158375ms)

-- stdout --
	* Profile "addons-966000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-966000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-966000: exit status 85 (62.298916ms)

-- stdout --
	* Profile "addons-966000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-966000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-966000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m22.198131042s)
--- PASS: TestAddons/Setup (202.20s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 6.447833ms
addons_test.go:823: volcano-controller stabilized in 6.489083ms
addons_test.go:815: volcano-admission stabilized in 6.514958ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-qj2kc" [5408ede9-93a7-4954-9cd0-4587db52c660] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.010821083s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-5wwm7" [98185a9f-f4ed-425a-a04b-b97015ab590b] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004666125s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-5qnrk" [e0ad46ea-093c-4b7e-adad-c5ccf1027adf] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003381041s
addons_test.go:842: (dbg) Run:  kubectl --context addons-966000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-966000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-966000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [449edd08-2d34-42cf-899b-cf411b0d3813] Pending
helpers_test.go:344: "test-job-nginx-0" [449edd08-2d34-42cf-899b-cf411b0d3813] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [449edd08-2d34-42cf-899b-cf411b0d3813] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005024333s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable volcano --alsologtostderr -v=1: (10.7139855s)
--- PASS: TestAddons/serial/Volcano (39.95s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-966000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-966000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-966000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-966000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3130ff7c-175c-4971-b764-ac4c20f0b8c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3130ff7c-175c-4971-b764-ac4c20f0b8c7] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.005389042s
addons_test.go:633: (dbg) Run:  kubectl --context addons-966000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-966000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-966000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-966000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.377208ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-9mkkz" [0630dc54-2290-4c8f-8883-b476685449ad] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005835208s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kcgq2" [f9eca648-f684-400a-ae91-6102a7f78de4] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010438542s
addons_test.go:331: (dbg) Run:  kubectl --context addons-966000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-966000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-966000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.913482958s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 ip
2024/10/28 03:44:55 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-966000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-966000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-966000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [acdb31fa-7702-4dd7-94c2-1440686aa0f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [acdb31fa-7702-4dd7-94c2-1440686aa0f5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00337875s
I1028 03:45:56.954355    1598 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-966000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable ingress --alsologtostderr -v=1: (7.267596292s)
--- PASS: TestAddons/parallel/Ingress (16.70s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t8qp5" [b5d3269b-4554-45fd-8a31-76172e93ebd2] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.01249375s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.260792041s)
--- PASS: TestAddons/parallel/InspektorGadget (11.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.321875ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7s827" [8ada22f6-f533-41d9-8d6d-b21ccc6ffac5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004179875s
addons_test.go:402: (dbg) Run:  kubectl --context addons-966000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1028 03:45:16.999692    1598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 03:45:17.002131    1598 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 03:45:17.002144    1598 kapi.go:107] duration metric: took 2.484208ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.490958ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-966000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-966000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9a8ef806-4421-4dca-ad5d-2a6921953120] Pending
helpers_test.go:344: "task-pv-pod" [9a8ef806-4421-4dca-ad5d-2a6921953120] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9a8ef806-4421-4dca-ad5d-2a6921953120] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.009843584s
addons_test.go:511: (dbg) Run:  kubectl --context addons-966000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-966000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-966000 delete pod task-pv-pod: (1.235892375s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-966000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-966000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-966000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1dc03df9-c246-448d-a9d6-9cada8d3a813] Pending
helpers_test.go:344: "task-pv-pod-restore" [1dc03df9-c246-448d-a9d6-9cada8d3a813] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1dc03df9-c246-448d-a9d6-9cada8d3a813] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.011916083s
addons_test.go:553: (dbg) Run:  kubectl --context addons-966000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-966000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-966000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.15594925s)
--- PASS: TestAddons/parallel/CSI (31.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-966000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-jgrbr" [fdac2137-360a-4a38-85fd-897cc93dd5fe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-jgrbr" [fdac2137-360a-4a38-85fd-897cc93dd5fe] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0053475s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable headlamp --alsologtostderr -v=1: (5.317123s)
--- PASS: TestAddons/parallel/Headlamp (16.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-7crwg" [6297ffa6-899f-481e-8296-75218c2dfd4e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007924625s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.21s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-966000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-966000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [577b16a5-ae5f-40b1-b3d0-01beba5941a0] Pending
helpers_test.go:344: "test-local-path" [577b16a5-ae5f-40b1-b3d0-01beba5941a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [577b16a5-ae5f-40b1-b3d0-01beba5941a0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [577b16a5-ae5f-40b1-b3d0-01beba5941a0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.008644875s
addons_test.go:906: (dbg) Run:  kubectl --context addons-966000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 ssh "cat /opt/local-path-provisioner/pvc-c49ed02f-0b9c-4e1a-9848-f8582962613b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-966000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-966000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.429423583s)
--- PASS: TestAddons/parallel/LocalPath (42.98s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2d29t" [4d13df6c-6936-42e7-b2bb-465eff9cc6bb] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008910958s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pzjgw" [4fe6bce2-fe3f-4b7f-8b91-c6ea11fdf06d] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005491041s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-966000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-966000 addons disable yakd --alsologtostderr -v=1: (5.255490542s)
--- PASS: TestAddons/parallel/Yakd (10.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-966000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-966000: (12.238363375s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-966000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-966000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-966000
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1028 04:38:41.288486    1598 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 04:38:41.288740    1598 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
E1028 04:38:42.915473    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
W1028 04:38:43.262582    1598 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1028 04:38:43.262795    1598 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1028 04:38:43.262848    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit
I1028 04:38:43.765636    1598 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0 0x10979a6e0] Decompressors:map[bz2:0x14000543430 gz:0x14000543438 tar:0x140005433e0 tar.bz2:0x140005433f0 tar.gz:0x14000543400 tar.xz:0x14000543410 tar.zst:0x14000543420 tbz2:0x140005433f0 tgz:0x14000543400 txz:0x14000543410 tzst:0x14000543420 xz:0x14000543440 zip:0x14000543450 zst:0x14000543448] Getters:map[file:0x140006a6f30 http:0x1400068db80 https:0x1400068dc70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 04:38:43.765767    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate265733836/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.03s)
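The two download.go entries above show the install fallback: the arch-specific driver asset is tried first, its checksum fetch 404s, and the installer retries the unsuffixed common name. The same status codes can be confirmed by hand, assuming a standard curl (URLs copied from the log):

	# First request should end in 404 (no -arm64 asset on the v1.3.0 release);
	# the second is the common-name fallback the installer then downloads
	curl -s -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256
	curl -s -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256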

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-196000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-196000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 --driver=qemu2 : (35.215111292s)
--- PASS: TestErrorSpam/setup (35.22s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 status
--- PASS: TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 pause
--- PASS: TestErrorSpam/pause (0.70s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 unpause
--- PASS: TestErrorSpam/unpause (0.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop: (3.179535083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop: (26.03780275s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-196000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-196000 stop: (26.037687666s)
--- PASS: TestErrorSpam/stop (55.26s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19876-1087/.minikube/files/etc/test/nested/copy/1598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-940000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.36787025s)
--- PASS: TestFunctional/serial/StartWithProxy (48.37s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
I1028 03:48:38.974933    1598 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --alsologtostderr -v=8
E1028 03:48:42.897071    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:42.904643    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:42.918142    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:42.941764    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:42.985236    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:43.068959    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:43.231606    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:43.555212    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:44.199022    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:45.482609    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:48.046431    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:48:53.169459    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
E1028 03:49:03.413201    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-940000 --alsologtostderr -v=8: (35.037071625s)
functional_test.go:663: soft start took 35.037539291s for "functional-940000" cluster.
I1028 03:49:14.012361    1598 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (35.04s)
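A soft start is a second minikube start against a profile that already exists: the qemu2 VM and cluster configuration are reused rather than provisioned from scratch. The repeated cert_rotation errors above reference the client cert of the earlier addons-966000 profile, whose files are gone; since the test still passes, they appear to be background noise from the shared test process rather than part of this test's assertions. A minimal sketch of the soft-start pattern (plain minikube standing in for the test's out/minikube-darwin-arm64 binary):

	# cold start: provisions the VM and bootstraps Kubernetes
	minikube start -p functional-940000 --driver=qemu2
	# soft start: same profile, so the existing VM and state are reused
	minikube start -p functional-940000 --alsologtostderr -v=8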
TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-940000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-940000 cache add registry.k8s.io/pause:3.1: (1.119366917s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3922351527/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache add minikube-local-cache-test:functional-940000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache delete minikube-local-cache-test:functional-940000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-940000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (71.152125ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.75s)
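The sequence above is the recovery path this test exercises: delete a cached image inside the node, confirm crictl no longer finds it, then let cache reload re-push everything in the local cache. Condensed into plain commands (minikube standing in for out/minikube-darwin-arm64):

	minikube -p functional-940000 ssh sudo docker rmi registry.k8s.io/pause:latest      # drop the image inside the node
	minikube -p functional-940000 ssh sudo crictl inspecti registry.k8s.io/pause:latest # now exits 1: image missing
	minikube -p functional-940000 cache reload                                          # re-push cached images to the node
	minikube -p functional-940000 ssh sudo crictl inspecti registry.k8s.io/pause:latest # succeeds again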
TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.8s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 kubectl -- --context functional-940000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.80s)
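minikube kubectl forwards everything after the -- separator to a kubectl binary matched to the cluster's Kubernetes version (downloaded on first use), so the pass-through call here and the direct call in the next test reach the same API server:

	minikube -p functional-940000 kubectl -- --context functional-940000 get pods
	# equivalent direct invocation with a version-matched kubectl:
	kubectl --context functional-940000 get pods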
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-940000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-940000 get pods: (1.160885584s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

TestFunctional/serial/ExtraConfig (36.75s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1028 03:49:23.895420    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-940000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.753042166s)
functional_test.go:761: restart took 36.753117s for "functional-940000" cluster.
I1028 03:49:57.889771    1598 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (36.75s)
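--extra-config takes component.key=value pairs and forwards each as a flag to the named component at (re)start, which is how this test injects an admission plugin into the apiserver. The invocation, reformatted for readability:

	minikube start -p functional-940000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all    # block until every verified component reports healthy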
TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.65s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2774238911/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (4.69s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-940000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-940000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-940000: exit status 115 (136.98975ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30208 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-940000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-940000 delete -f testdata/invalidsvc.yaml: (1.455864333s)
--- PASS: TestFunctional/serial/InvalidService (4.69s)

TestFunctional/parallel/ConfigCmd (0.25s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 config get cpus: exit status 14 (35.184958ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 config get cpus: exit status 14 (36.143875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
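The two non-zero exits above are the point of the test: config get on an unset key fails (exit status 14 in this run, with "specified key could not be found in config"), which makes set/unset round trips scriptable:

	minikube -p functional-940000 config set cpus 2
	minikube -p functional-940000 config get cpus      # prints 2, exits 0
	minikube -p functional-940000 config unset cpus
	minikube -p functional-940000 config get cpus      # fails: key not in config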
TestFunctional/parallel/DashboardCmd (8.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-940000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-940000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2479: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.91s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-940000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (139.794959ms)
-- stdout --
	* [functional-940000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1028 03:50:53.709485    2445 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:50:53.709657    2445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:50:53.709661    2445 out.go:358] Setting ErrFile to fd 2...
	I1028 03:50:53.709663    2445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:50:53.709816    2445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:50:53.710949    2445 out.go:352] Setting JSON to false
	I1028 03:50:53.733323    2445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1224,"bootTime":1730111429,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:50:53.733416    2445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:50:53.738336    2445 out.go:177] * [functional-940000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 03:50:53.746373    2445 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 03:50:53.746380    2445 notify.go:220] Checking for updates...
	I1028 03:50:53.754277    2445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:50:53.758283    2445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:50:53.764232    2445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:50:53.768399    2445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 03:50:53.771395    2445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 03:50:53.775679    2445 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:50:53.775947    2445 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:50:53.780267    2445 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 03:50:53.787306    2445 start.go:297] selected driver: qemu2
	I1028 03:50:53.787318    2445 start.go:901] validating driver "qemu2" against &{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:50:53.787377    2445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 03:50:53.793730    2445 out.go:201] 
	W1028 03:50:53.797281    2445 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 03:50:53.801319    2445 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
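--dry-run walks the normal start validation (profile load, driver selection, resource checks) without touching the VM, so an undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 here) while a well-formed dry run exits cleanly. Both outcomes from this test:

	minikube start -p functional-940000 --dry-run --memory 250MB --driver=qemu2    # exit 23: below the 1800MB usable minimum
	minikube start -p functional-940000 --dry-run --driver=qemu2                   # validates against the existing profile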
TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-940000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-940000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (141.246041ms)
-- stdout --
	* [functional-940000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I1028 03:50:53.879593    2455 out.go:345] Setting OutFile to fd 1 ...
	I1028 03:50:53.879725    2455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:50:53.879729    2455 out.go:358] Setting ErrFile to fd 2...
	I1028 03:50:53.879731    2455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 03:50:53.879853    2455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
	I1028 03:50:53.883438    2455 out.go:352] Setting JSON to false
	I1028 03:50:53.902555    2455 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1224,"bootTime":1730111429,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 03:50:53.902646    2455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 03:50:53.906367    2455 out.go:177] * [functional-940000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1028 03:50:53.913231    2455 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 03:50:53.913264    2455 notify.go:220] Checking for updates...
	I1028 03:50:53.924190    2455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	I1028 03:50:53.931294    2455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 03:50:53.934336    2455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 03:50:53.937274    2455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	I1028 03:50:53.948327    2455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 03:50:53.956651    2455 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 03:50:53.956912    2455 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 03:50:53.960324    2455 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1028 03:50:53.968295    2455 start.go:297] selected driver: qemu2
	I1028 03:50:53.968306    2455 start.go:901] validating driver "qemu2" against &{Name:functional-940000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 03:50:53.968360    2455 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 03:50:53.974332    2455 out.go:201] 
	W1028 03:50:53.977247    2455 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 03:50:53.981350    2455 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.25s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
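The -f argument to status is a Go template evaluated against the status struct, which lets the test pin exactly the fields it asserts on; the label text (including the test's own "kublet" spelling) is arbitrary, only the {{.Field}} names must match:

	minikube -p functional-940000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-940000 status -o json    # same data, machine-readable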
TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (25.74s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [58ae0bb2-4407-4b87-a38d-80e08b35bf8e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00825275s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-940000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-940000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-940000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-940000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bac02e96-1a17-40e2-bc1d-240d9e7ea322] Pending
helpers_test.go:344: "sp-pod" [bac02e96-1a17-40e2-bc1d-240d9e7ea322] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bac02e96-1a17-40e2-bc1d-240d9e7ea322] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007859791s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-940000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-940000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-940000 delete -f testdata/storage-provisioner/pod.yaml: (1.211466916s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-940000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d24d8b6-8684-4ef8-be5f-99aea7bedfd3] Pending
helpers_test.go:344: "sp-pod" [7d24d8b6-8684-4ef8-be5f-99aea7bedfd3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d24d8b6-8684-4ef8-be5f-99aea7bedfd3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011864s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-940000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.74s)
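The delete/re-apply in the middle of this test is the real assertion: a file written to the mount before the first sp-pod is deleted must still exist in the replacement pod, proving the claim is backed by a persistent volume rather than pod-local storage. Condensed from the run above:

	kubectl --context functional-940000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-940000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-940000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-940000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-940000 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same PVC
	kubectl --context functional-940000 exec sp-pod -- ls /tmp/mount                     # foo is still there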
TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.42s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cp testdata/cp-test.txt /home/docker/cp-test.txt
E1028 03:50:04.857931    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/addons-966000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh -n functional-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cp functional-940000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd787618629/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh -n functional-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh -n functional-940000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1598/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /etc/test/nested/copy/1598/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /etc/ssl/certs/1598.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /usr/share/ca-certificates/1598.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /etc/ssl/certs/15982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /usr/share/ca-certificates/15982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
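The hashed paths checked above (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash link names, the convention used in /etc/ssl/certs, so each synced certificate is reachable both by file name and by hash. Assuming a standard openssl binary, the hash for a PEM file can be reproduced with:

	openssl x509 -noout -subject_hash -in 1598.pem    # prints the 8-hex-digit name used for the .0 file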
TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-940000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
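The --output=go-template form used here ranges over the first node's label map and prints only the keys, keeping the assertion independent of label values; the same template works verbatim outside the test:

	kubectl --context functional-940000 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'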
TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "sudo systemctl is-active crio": exit status 1 (104.831334ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)
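The non-zero exit is the expected result: systemctl is-active exits 0 only for an active unit, and on this docker-runtime node crio must be inactive (stdout "inactive", remote exit status 3), so the test passes precisely because the command fails:

	minikube -p functional-940000 ssh "sudo systemctl is-active crio"    # prints "inactive", non-zero exit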
TestFunctional/parallel/License (0.29s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.22s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-940000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-940000
docker.io/kicbase/echo-server:functional-940000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-940000 image ls --format short --alsologtostderr:
I1028 03:50:54.470120    2475 out.go:345] Setting OutFile to fd 1 ...
I1028 03:50:54.474898    2475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.474907    2475 out.go:358] Setting ErrFile to fd 2...
I1028 03:50:54.474910    2475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.475081    2475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 03:50:54.475564    2475 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.475631    2475 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.476523    2475 ssh_runner.go:195] Run: systemctl --version
I1028 03:50:54.476533    2475 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
I1028 03:50:54.499539    2475 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-940000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-940000 | 74d02c56f32d0 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.2           | f9c26480f1e72 | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.2           | d6b061e73ae45 | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 021d242013305 | 94.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-940000 | 8c4443ee4ee41 | 1.41MB |
| docker.io/library/nginx                     | latest            | 4b196525bd3cc | 197MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-940000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 9404aea098d9e | 85.9MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-940000 image ls --format table --alsologtostderr:
I1028 03:50:56.665156    2491 out.go:345] Setting OutFile to fd 1 ...
I1028 03:50:56.665327    2491 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:56.665330    2491 out.go:358] Setting ErrFile to fd 2...
I1028 03:50:56.665333    2491 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:56.665464    2491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 03:50:56.665867    2491 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:56.665925    2491 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:56.666704    2491 ssh_runner.go:195] Run: systemctl --version
I1028 03:50:56.666712    2491 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
I1028 03:50:56.689566    2491 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/10/28 03:51:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-940000 image ls --format json --alsologtostderr:
[{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"91600000"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"85900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-940000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"74d02c56f32d0b8384361648a5b18799b6f0a92bb83fe59cc8bf36e74c2824b7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-940000"],"size":"30"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"66000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8c4443ee4ee4140ff681f617f2e6a6315b010852f8f8ce83f5a0e6ea8e540ae2","repoDigests":[],"repoTags":["localhost/my-image:functional-940000"],"size":"1410000"},{"id":"4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"50800000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-940000 image ls --format json --alsologtostderr:
I1028 03:50:56.583465    2489 out.go:345] Setting OutFile to fd 1 ...
I1028 03:50:56.583639    2489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:56.583643    2489 out.go:358] Setting ErrFile to fd 2...
I1028 03:50:56.583645    2489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:56.583779    2489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 03:50:56.584823    2489 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:56.585272    2489 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:56.586209    2489 ssh_runner.go:195] Run: systemctl --version
I1028 03:50:56.586218    2489 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
I1028 03:50:56.608211    2489 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
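
Note: the JSON above is a flat array of image records (id, repoDigests, repoTags, size). A minimal sketch of slicing that output on the host, assuming jq is available (jq is not part of the test harness):

# Print tag and size for each image reported by `image ls --format json`
out/minikube-darwin-arm64 -p functional-940000 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0] // "untagged")\t\(.size)"'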

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-940000 image ls --format yaml --alsologtostderr:
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-940000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 74d02c56f32d0b8384361648a5b18799b6f0a92bb83fe59cc8bf36e74c2824b7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-940000
size: "30"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "91600000"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "66000000"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "85900000"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-940000 image ls --format yaml --alsologtostderr:
I1028 03:50:54.556216    2477 out.go:345] Setting OutFile to fd 1 ...
I1028 03:50:54.556437    2477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.556440    2477 out.go:358] Setting ErrFile to fd 2...
I1028 03:50:54.556443    2477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.556556    2477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 03:50:54.557059    2477 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.557122    2477 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.558028    2477 ssh_runner.go:195] Run: systemctl --version
I1028 03:50:54.558037    2477 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
I1028 03:50:54.580352    2477 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh pgrep buildkitd: exit status 1 (60.580917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image build -t localhost/my-image:functional-940000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-940000 image build -t localhost/my-image:functional-940000 testdata/build --alsologtostderr: (1.819309208s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-940000 image build -t localhost/my-image:functional-940000 testdata/build --alsologtostderr:
I1028 03:50:54.686934    2482 out.go:345] Setting OutFile to fd 1 ...
I1028 03:50:54.687179    2482 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.687183    2482 out.go:358] Setting ErrFile to fd 2...
I1028 03:50:54.687185    2482 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 03:50:54.687312    2482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19876-1087/.minikube/bin
I1028 03:50:54.687749    2482 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.688475    2482 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 03:50:54.689360    2482 ssh_runner.go:195] Run: systemctl --version
I1028 03:50:54.689368    2482 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19876-1087/.minikube/machines/functional-940000/id_rsa Username:docker}
I1028 03:50:54.710641    2482 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1341877992.tar
I1028 03:50:54.710700    2482 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 03:50:54.714303    2482 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1341877992.tar
I1028 03:50:54.715914    2482 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1341877992.tar: stat -c "%s %y" /var/lib/minikube/build/build.1341877992.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1341877992.tar': No such file or directory
I1028 03:50:54.715925    2482 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1341877992.tar --> /var/lib/minikube/build/build.1341877992.tar (3072 bytes)
I1028 03:50:54.724634    2482 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1341877992
I1028 03:50:54.730844    2482 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1341877992 -xf /var/lib/minikube/build/build.1341877992.tar
I1028 03:50:54.736128    2482 docker.go:360] Building image: /var/lib/minikube/build/build.1341877992
I1028 03:50:54.736204    2482 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-940000 /var/lib/minikube/build/build.1341877992
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8c4443ee4ee4140ff681f617f2e6a6315b010852f8f8ce83f5a0e6ea8e540ae2 done
#8 naming to localhost/my-image:functional-940000 done
#8 DONE 0.0s
I1028 03:50:56.407414    2482 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-940000 /var/lib/minikube/build/build.1341877992: (1.671192s)
I1028 03:50:56.407499    2482 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1341877992
I1028 03:50:56.411141    2482 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1341877992.tar
I1028 03:50:56.415925    2482 build_images.go:217] Built localhost/my-image:functional-940000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1341877992.tar
I1028 03:50:56.415944    2482 build_images.go:133] succeeded building to: functional-940000
I1028 03:50:56.415947    2482 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.96s)
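
Note: the BuildKit trace above implies a three-step build: a busybox base pinned by digest, a no-op RUN, and an ADD of content.txt. A hedged reconstruction of an equivalent build context (file names and contents are read off the trace, not copied from the actual testdata/build directory):

mkdir demo-build && cd demo-build
printf 'demo\n' > content.txt
# Base image and digest taken from build step #5 above
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
RUN true
ADD content.txt /
EOF
out/minikube-darwin-arm64 -p functional-940000 image build -t localhost/my-image:functional-940000 .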

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.6450205s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-940000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.66s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-940000 docker-env) && out/minikube-darwin-arm64 status -p functional-940000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-940000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.33s)
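
Note: DockerEnv/bash checks that `docker-env` re-points the host docker CLI at the daemon inside the minikube VM. The same round trip by hand (a sketch; assumes a bash shell on the host):

eval "$(out/minikube-darwin-arm64 -p functional-940000 docker-env)"
docker images    # now lists images inside the VM, not on the host
eval "$(out/minikube-darwin-arm64 -p functional-940000 docker-env --unset)"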

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-940000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-940000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-gfcdn" [4ef783a6-1e1d-4c4d-a4b9-e555cff18ccd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-gfcdn" [4ef783a6-1e1d-4c4d-a4b9-e555cff18ccd] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.007875375s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image load --daemon kicbase/echo-server:functional-940000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image load --daemon kicbase/echo-server:functional-940000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-940000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image load --daemon kicbase/echo-server:functional-940000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image save kicbase/echo-server:functional-940000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image rm kicbase/echo-server:functional-940000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-940000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 image save --daemon kicbase/echo-server:functional-940000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-940000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2280: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-940000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [868b67ed-b349-403f-86fb-72c55454fa7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [868b67ed-b349-403f-86fb-72c55454fa7c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006769667s
I1028 03:50:20.155560    1598 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service list -o json
functional_test.go:1494: Took "82.750208ms" to run "out/minikube-darwin-arm64 -p functional-940000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32412
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32412
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
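
Note: HTTPS and URL resolve the same NodePort endpoint on the VM address. A quick manual probe against the endpoint found above (assumes the cluster from this run is still up; the echoserver pod answers plain HTTP):

# IP and port taken from the log lines above
curl http://192.168.105.4:32412/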

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-940000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.49.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
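
Note: with `minikube tunnel` running, the LoadBalancer service receives a host-routable ingress IP (10.103.49.5 in this run). Checking it by hand while the tunnel is up, using the same jsonpath query the IngressIP subtest runs:

kubectl --context functional-940000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://10.103.49.5/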

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1028 03:50:20.251558    1598 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1028 03:50:20.295492    1598 config.go:182] Loaded profile config "functional-940000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-940000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "97.954417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "39.639ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "98.449ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.674291ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
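
Note: profile_json_output only times the two list commands; a hedged one-liner for reading the result, assuming jq on the host and the top-level "valid"/"invalid" arrays that current minikube releases emit:

# List the names of all valid profiles
out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'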

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730112645179756000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730112645179756000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730112645179756000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001/test-1730112645179756000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.522125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 03:50:45.242977    1598 retry.go:31] will retry after 555.311556ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 10:50 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 10:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 10:50 test-1730112645179756000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh cat /mount-9p/test-1730112645179756000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-940000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd] Pending
helpers_test.go:344: "busybox-mount" [c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c4b4a1dc-0e50-4a4d-8abe-b8850f89a1fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008321916s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-940000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2044071837/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.32s)
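
Note: any-port is the standard 9p mount round trip. The same steps by hand, with a hypothetical host directory standing in for the temp dir the test generates:

# Mount a host directory into the guest, verify it, then unmount
out/minikube-darwin-arm64 mount -p functional-940000 /tmp/demo:/mount-9p &
out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-940000 ssh -- ls -la /mount-9p
out/minikube-darwin-arm64 -p functional-940000 ssh "sudo umount -f /mount-9p"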

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1454789778/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.676792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 03:50:50.564097    1598 retry.go:31] will retry after 624.202518ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1454789778/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "sudo umount -f /mount-9p": exit status 1 (64.493625ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-940000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1454789778/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount1: exit status 1 (70.379166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 03:50:51.734058    1598 retry.go:31] will retry after 475.562457ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount1: exit status 1 (89.682ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 03:50:52.301662    1598 retry.go:31] will retry after 1.106001582s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-940000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-940000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-940000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3506162450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-940000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-940000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-940000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-921000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (33.46s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-516000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-516000 --driver=qemu2 : (33.463041708s)
--- PASS: TestImageBuild/serial/Setup (33.46s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.34s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-516000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-516000: (1.340998s)
--- PASS: TestImageBuild/serial/NormalBuild (1.34s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.42s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-516000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.42s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.34s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-516000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.34s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-516000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-990000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-990000 --output=json --user=testUser: (6.17847125s)
--- PASS: TestJSONOutput/stop/Command (6.18s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-463000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-463000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.998ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"177ac08a-338d-41b2-96be-5f268b04ad40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-463000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a6a39d4-1c08-4435-bcbd-064e036738dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"4f185f1e-637f-4f2d-87df-433382a6e72a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig"}}
	{"specversion":"1.0","id":"70c04e4f-f638-437b-9b9f-65343e1c603f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"905215ed-352c-4e0f-84da-7fb6c0baf819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3be2523f-4b6b-47b0-a018-6b6e4874fc53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube"}}
	{"specversion":"1.0","id":"db9d422c-e293-4509-ba2a-d5fb82cb8fd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43b11db6-9350-49a2-81cf-4a3f68730220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-463000
--- PASS: TestErrorJSONOutput (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-397000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-397000 --driver=qemu2 : (35.871046792s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-399000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-399000 --driver=qemu2 : (35.365468542s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-397000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-399000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-399000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-399000
helpers_test.go:175: Cleaning up "first-397000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-397000
--- PASS: TestMinikubeProfile (71.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-818000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (106.822542ms)

-- stdout --
	* [NoKubernetes-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19876-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19876-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-818000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-818000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.596042ms)

-- stdout --
	* The control-plane node NoKubernetes-818000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-818000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.667703292s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E1028 04:50:06.286353    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19876-1087/.minikube/profiles/functional-940000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.847155708s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.52s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-818000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-818000: (3.584568542s)
--- PASS: TestNoKubernetes/serial/Stop (3.58s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-818000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-818000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.191ms)

-- stdout --
	* The control-plane node NoKubernetes-818000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-818000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-714000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-498000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-498000 --alsologtostderr -v=3: (2.135888208s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-498000 -n old-k8s-version-498000: exit status 7 (54.355416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-498000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-652000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-652000 --alsologtostderr -v=3: (3.397510958s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.40s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-652000 -n no-preload-652000: exit status 7 (56.741792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-652000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-420000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-420000 --alsologtostderr -v=3: (3.650257625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-420000 -n embed-certs-420000: exit status 7 (61.097084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-420000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3: (3.59918125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.60s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (58.599625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-892000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-800000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-800000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-800000 --alsologtostderr -v=3: (3.454980416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-800000 -n newest-cni-800000: exit status 7 (62.95975ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-800000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/274)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-196000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-196000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-196000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/hosts:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/resolv.conf:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-196000

>>> host: crictl pods:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: crictl containers:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> k8s: describe netcat deployment:
error: context "cilium-196000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-196000" does not exist

>>> k8s: netcat logs:
error: context "cilium-196000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-196000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-196000" does not exist

>>> k8s: coredns logs:
error: context "cilium-196000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-196000" does not exist

>>> k8s: api server logs:
error: context "cilium-196000" does not exist

>>> host: /etc/cni:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: ip a s:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: ip r s:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: iptables-save:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: iptables table nat:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-196000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-196000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-196000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-196000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-196000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-196000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-196000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-196000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-196000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-196000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-196000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: kubelet daemon config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> k8s: kubelet logs:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-196000

>>> host: docker daemon status:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: docker daemon config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: docker system info:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: cri-docker daemon status:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: cri-docker daemon config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: cri-dockerd version:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: containerd daemon status:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: containerd daemon config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: containerd config dump:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: crio daemon status:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: crio daemon config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: /etc/crio:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

>>> host: crio config:
* Profile "cilium-196000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196000"

----------------------- debugLogs end: cilium-196000 [took: 2.343179084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-196000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-096000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-096000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
