Test Report: QEMU_macOS 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.3
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 9.99
33 TestAddons/parallel/Registry 71.26
45 TestCertOptions 10.18
46 TestCertExpiration 195.32
47 TestDockerFlags 10.08
48 TestForceSystemdFlag 10.21
49 TestForceSystemdEnv 12.43
94 TestFunctional/parallel/ServiceCmdConnect 31.05
166 TestMultiControlPlane/serial/StopSecondaryNode 64.13
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.93
168 TestMultiControlPlane/serial/RestartSecondaryNode 82.97
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.08
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.1
183 TestJSONOutput/start/Command 9.97
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.29
215 TestMountStart/serial/StartWithMountFirst 10.26
218 TestMultiNode/serial/FreshStart2Nodes 9.96
219 TestMultiNode/serial/DeployApp2Nodes 113.1
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.08
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 55.77
227 TestMultiNode/serial/RestartKeepsNodes 7.48
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 2.72
230 TestMultiNode/serial/RestartMultiNode 5.26
231 TestMultiNode/serial/ValidateNameConflict 20.13
235 TestPreload 10.01
237 TestScheduledStopUnix 10.02
238 TestSkaffold 12.55
241 TestRunningBinaryUpgrade 600.6
243 TestKubernetesUpgrade 18.29
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.76
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.89
259 TestStoppedBinaryUpgrade/Upgrade 573.92
261 TestPause/serial/Start 10
271 TestNoKubernetes/serial/StartWithK8s 9.94
272 TestNoKubernetes/serial/StartWithStopK8s 5.33
273 TestNoKubernetes/serial/Start 5.3
277 TestNoKubernetes/serial/StartNoArgs 5.3
279 TestNetworkPlugins/group/auto/Start 9.81
280 TestNetworkPlugins/group/flannel/Start 9.96
281 TestNetworkPlugins/group/kindnet/Start 9.82
282 TestNetworkPlugins/group/enable-default-cni/Start 9.95
283 TestNetworkPlugins/group/bridge/Start 9.86
284 TestNetworkPlugins/group/kubenet/Start 9.85
285 TestNetworkPlugins/group/custom-flannel/Start 9.93
286 TestNetworkPlugins/group/calico/Start 9.88
287 TestNetworkPlugins/group/false/Start 9.89
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.88
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.1
301 TestStartStop/group/no-preload/serial/FirstStart 9.84
303 TestStartStop/group/embed-certs/serial/FirstStart 11.47
304 TestStartStop/group/no-preload/serial/DeployApp 0.1
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
308 TestStartStop/group/no-preload/serial/SecondStart 6.17
309 TestStartStop/group/embed-certs/serial/DeployApp 0.1
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
314 TestStartStop/group/no-preload/serial/Pause 0.11
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
319 TestStartStop/group/embed-certs/serial/SecondStart 7.26
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/embed-certs/serial/Pause 0.11
328 TestStartStop/group/newest-cni/serial/FirstStart 9.96
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.46
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/SecondStart 5.25
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
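
Most of the short (~10 s) start failures in this table exit the same way as TestOffline below: the qemu2 driver cannot reach the socket_vmnet socket (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A minimal host-side check sketch, assuming socket_vmnet was installed via Homebrew on this agent (an assumption about the setup, not something the report states):

    # Verify the socket exists and the daemon is running (paths taken from the logs below)
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Restart the service; "brew services" assumes a Homebrew-managed install
    sudo brew services restart socket_vmnet

If the socket is present and the daemon is up, the qemu2 VM creation steps shown in the failures below should stop aborting at the socket_vmnet_client step.
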
TestDownloadOnly/v1.20.0/json-events (15.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-310000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-310000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.296254584s)

-- stdout --
	{"specversion":"1.0","id":"a5f36091-5639-46d4-9b3c-8a1ecf9d5b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-310000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e3f716f-aa1d-4810-b219-0da4492a241e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"0d2c8ae6-7304-40da-b4a7-f02ba61408c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig"}}
	{"specversion":"1.0","id":"7e7830bc-8093-4d44-9005-ea32693f8db4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b18ab36a-fdde-405e-b6ab-9a6c7a7cffb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"700f62d8-428d-45d1-b3ed-92d903e647b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube"}}
	{"specversion":"1.0","id":"9c88ec29-a710-4fe5-9742-27c0b45668ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"831d7835-9a37-4596-ad3e-472351559984","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18d3d061-f117-458b-85a1-6a65e02ee6ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d3dc0321-f149-4f39-8c4c-4966c31951a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"deb427a7-922e-475a-8fb6-bf5c9e7f8d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-310000\" primary control-plane node in \"download-only-310000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"51635262-fa8b-40ba-8875-da612074f365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"761f02f8-5e2d-48dd-8e5f-eded9da1304e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0] Decompressors:map[bz2:0x140003bd770 gz:0x140003bd778 tar:0x140003bd6d0 tar.bz2:0x140003bd6e0 tar.gz:0x140003bd700 tar.xz:0x140003bd730 tar.zst:0x140003bd760 tbz2:0x140003bd6e0 tgz:0x14
0003bd700 txz:0x140003bd730 tzst:0x140003bd760 xz:0x140003bd780 zip:0x140003bd7c0 zst:0x140003bd788] Getters:map[file:0x14000111610 http:0x140006f21e0 https:0x140006f2230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"6a6aab11-2eae-4009-8a93-337cc79c503f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0920 09:43:17.830095    1683 out.go:345] Setting OutFile to fd 1 ...
	I0920 09:43:17.830239    1683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:17.830242    1683 out.go:358] Setting ErrFile to fd 2...
	I0920 09:43:17.830245    1683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:17.830372    1683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	W0920 09:43:17.830465    1683 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19672-1143/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19672-1143/.minikube/config/config.json: no such file or directory
	I0920 09:43:17.831789    1683 out.go:352] Setting JSON to true
	I0920 09:43:17.849353    1683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":760,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 09:43:17.849422    1683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 09:43:17.854772    1683 out.go:97] [download-only-310000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 09:43:17.854914    1683 notify.go:220] Checking for updates...
	W0920 09:43:17.854943    1683 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 09:43:17.858719    1683 out.go:169] MINIKUBE_LOCATION=19672
	I0920 09:43:17.863739    1683 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:43:17.868541    1683 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 09:43:17.872692    1683 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:43:17.875778    1683 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	W0920 09:43:17.880690    1683 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 09:43:17.880895    1683 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 09:43:17.885739    1683 out.go:97] Using the qemu2 driver based on user configuration
	I0920 09:43:17.885760    1683 start.go:297] selected driver: qemu2
	I0920 09:43:17.885765    1683 start.go:901] validating driver "qemu2" against <nil>
	I0920 09:43:17.885845    1683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 09:43:17.889750    1683 out.go:169] Automatically selected the socket_vmnet network
	I0920 09:43:17.895488    1683 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 09:43:17.895580    1683 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 09:43:17.895637    1683 cni.go:84] Creating CNI manager for ""
	I0920 09:43:17.895682    1683 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 09:43:17.895735    1683 start.go:340] cluster config:
	{Name:download-only-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:43:17.900969    1683 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:43:17.904741    1683 out.go:97] Downloading VM boot image ...
	I0920 09:43:17.904759    1683 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0920 09:43:24.209856    1683 out.go:97] Starting "download-only-310000" primary control-plane node in "download-only-310000" cluster
	I0920 09:43:24.209880    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:24.272642    1683 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 09:43:24.272649    1683 cache.go:56] Caching tarball of preloaded images
	I0920 09:43:24.272829    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:24.276893    1683 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 09:43:24.276900    1683 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:24.381457    1683 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 09:43:31.866611    1683 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:31.866782    1683 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:32.562132    1683 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 09:43:32.562341    1683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/download-only-310000/config.json ...
	I0920 09:43:32.562359    1683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/download-only-310000/config.json: {Name:mk2133cfae0407a99eccceb5760ad0dbcf4779df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:43:32.562603    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:32.562802    1683 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 09:43:33.045087    1683 out.go:193] 
	W0920 09:43:33.052019    1683 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0] Decompressors:map[bz2:0x140003bd770 gz:0x140003bd778 tar:0x140003bd6d0 tar.bz2:0x140003bd6e0 tar.gz:0x140003bd700 tar.xz:0x140003bd730 tar.zst:0x140003bd760 tbz2:0x140003bd6e0 tgz:0x140003bd700 txz:0x140003bd730 tzst:0x140003bd760 xz:0x140003bd780 zip:0x140003bd7c0 zst:0x140003bd788] Getters:map[file:0x14000111610 http:0x140006f21e0 https:0x140006f2230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 09:43:33.052048    1683 out_reason.go:110] 
	W0920 09:43:33.061869    1683 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 09:43:33.064973    1683 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-310000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.30s)
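
The -o=json run above emits one CloudEvents-style JSON object per line, so the failing step can be pulled straight out of the stream. A small sketch, assuming jq is available on the agent (not something the report confirms):

    # Re-run the same download-only start and print only error events
    out/minikube-darwin-arm64 start -o=json --download-only -p download-only-310000 \
        --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'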

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
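
The kubectl binary is missing from the cache because the download in the previous test 404'd on its checksum file; most likely dl.k8s.io never published a darwin/arm64 kubectl for v1.20.0. A quick probe of the URLs from the logs (the v1.31.1 URL is the one TestBinaryMirror references and is expected to exist):

    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n1   # 404 expected
    curl -sI https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 | head -n1   # should succeed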

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
I0920 09:43:40.692749    1679 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-190000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-190000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 : exit status 40 (153.357ms)

-- stdout --
	* [binary-mirror-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-190000" primary control-plane node in "binary-mirror-190000" cluster
	
	

-- /stdout --
** stderr ** 
	I0920 09:43:40.751363    1745 out.go:345] Setting OutFile to fd 1 ...
	I0920 09:43:40.751506    1745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:40.751510    1745 out.go:358] Setting ErrFile to fd 2...
	I0920 09:43:40.751513    1745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:40.751647    1745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 09:43:40.752702    1745 out.go:352] Setting JSON to false
	I0920 09:43:40.769080    1745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":783,"bootTime":1726849837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 09:43:40.769136    1745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 09:43:40.773262    1745 out.go:177] * [binary-mirror-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 09:43:40.783335    1745 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 09:43:40.783361    1745 notify.go:220] Checking for updates...
	I0920 09:43:40.791376    1745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:43:40.794174    1745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 09:43:40.797258    1745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:43:40.800267    1745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 09:43:40.801924    1745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 09:43:40.806266    1745 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 09:43:40.813102    1745 start.go:297] selected driver: qemu2
	I0920 09:43:40.813108    1745 start.go:901] validating driver "qemu2" against <nil>
	I0920 09:43:40.813155    1745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 09:43:40.816216    1745 out.go:177] * Automatically selected the socket_vmnet network
	I0920 09:43:40.821600    1745 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 09:43:40.821694    1745 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 09:43:40.821716    1745 cni.go:84] Creating CNI manager for ""
	I0920 09:43:40.821742    1745 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:43:40.821751    1745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 09:43:40.821789    1745 start.go:340] cluster config:
	{Name:binary-mirror-190000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49313 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:43:40.825318    1745 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:43:40.833229    1745 out.go:177] * Starting "binary-mirror-190000" primary control-plane node in "binary-mirror-190000" cluster
	I0920 09:43:40.837305    1745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:43:40.837325    1745 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 09:43:40.837334    1745 cache.go:56] Caching tarball of preloaded images
	I0920 09:43:40.837408    1745 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 09:43:40.837414    1745 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 09:43:40.837636    1745 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/binary-mirror-190000/config.json ...
	I0920 09:43:40.837648    1745 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/binary-mirror-190000/config.json: {Name:mk3f7536ccf54833b4cf028629d1471760921747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:43:40.838032    1745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:43:40.838084    1745 download.go:107] Downloading: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0920 09:43:40.852228    1745 out.go:201] 
	W0920 09:43:40.856256    1745 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0] Decompressors:map[bz2:0x140004bee40 gz:0x140004bee48 tar:0x140004bedf0 tar.bz2:0x140004bee00 tar.gz:0x140004bee10 tar.xz:0x140004bee20 tar.zst:0x140004bee30 tbz2:0x140004bee00 tgz:0x140004bee10 txz:0x140004bee20 tzst:0x140004bee30 xz:0x140004bee50 zip:0x140004bee60 zst:0x140004bee58] Getters:map[file:0x1400090db20 http:0x14000697270 https:0x140006972c0] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0 0x1073856c0] Decompressors:map[bz2:0x140004bee40 gz:0x140004bee48 tar:0x140004bedf0 tar.bz2:0x140004bee00 tar.gz:0x140004bee10 tar.xz:0x140004bee20 tar.zst:0x140004bee30 tbz2:0x140004bee00 tgz:0x140004bee10 txz:0x140004bee20 tzst:0x140004bee30 xz:0x140004bee50 zip:0x140004bee60 zst:0x140004bee58] Getters:map[file:0x1400090db20 http:0x14000697270 https:0x140006972c0] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W0920 09:43:40.856262    1745 out.go:270] * 
	* 
	W0920 09:43:40.856715    1745 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 09:43:40.868271    1745 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-190000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49313" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-190000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-190000
--- FAIL: TestBinaryMirror (0.26s)
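
The --binary-mirror server at http://127.0.0.1:49313 is a short-lived local server started by the test itself, so the "unexpected EOF" points at that connection dropping mid-transfer rather than at dl.k8s.io. A manual probe sketch, with $MIRROR standing in for whichever mirror is under test (the value shown is the test's ephemeral address and is only reachable while the test runs):

    MIRROR=http://127.0.0.1:49313   # hypothetical; valid only during the test run
    curl -v "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl.sha256" -o /dev/null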

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-042000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-042000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.830356167s)

-- stdout --
	* [offline-docker-042000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-042000" primary control-plane node in "offline-docker-042000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:23:01.109147    3970 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:23:01.109290    3970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:01.109293    3970 out.go:358] Setting ErrFile to fd 2...
	I0920 10:23:01.109296    3970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:01.109428    3970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:23:01.110504    3970 out.go:352] Setting JSON to false
	I0920 10:23:01.128254    3970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3144,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:23:01.128319    3970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:23:01.134122    3970 out.go:177] * [offline-docker-042000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:23:01.142005    3970 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:23:01.142010    3970 notify.go:220] Checking for updates...
	I0920 10:23:01.148931    3970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:23:01.152076    3970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:23:01.155032    3970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:23:01.156249    3970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:23:01.158969    3970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:23:01.162352    3970 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:01.162408    3970 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:23:01.165803    3970 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:23:01.173032    3970 start.go:297] selected driver: qemu2
	I0920 10:23:01.173043    3970 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:23:01.173050    3970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:23:01.174879    3970 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:23:01.178858    3970 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:23:01.182073    3970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:23:01.182093    3970 cni.go:84] Creating CNI manager for ""
	I0920 10:23:01.182120    3970 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:23:01.182129    3970 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:23:01.182164    3970 start.go:340] cluster config:
	{Name:offline-docker-042000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:23:01.185626    3970 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:23:01.191966    3970 out.go:177] * Starting "offline-docker-042000" primary control-plane node in "offline-docker-042000" cluster
	I0920 10:23:01.195937    3970 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:23:01.195969    3970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:23:01.195976    3970 cache.go:56] Caching tarball of preloaded images
	I0920 10:23:01.196045    3970 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:23:01.196050    3970 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:23:01.196109    3970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/offline-docker-042000/config.json ...
	I0920 10:23:01.196118    3970 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/offline-docker-042000/config.json: {Name:mk8b71a7dc81d966176ebc5bfaa602a304373b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:23:01.196435    3970 start.go:360] acquireMachinesLock for offline-docker-042000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:01.196470    3970 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "offline-docker-042000"
	I0920 10:23:01.196484    3970 start.go:93] Provisioning new machine with config: &{Name:offline-docker-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:01.196509    3970 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:01.204956    3970 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:01.221067    3970 start.go:159] libmachine.API.Create for "offline-docker-042000" (driver="qemu2")
	I0920 10:23:01.221098    3970 client.go:168] LocalClient.Create starting
	I0920 10:23:01.221182    3970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:01.221215    3970 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:01.221225    3970 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:01.221269    3970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:01.221293    3970 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:01.221302    3970 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:01.221671    3970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:01.384355    3970 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:01.518681    3970 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:01.518692    3970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:01.518878    3970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:01.528224    3970 main.go:141] libmachine: STDOUT: 
	I0920 10:23:01.528253    3970 main.go:141] libmachine: STDERR: 
	I0920 10:23:01.528329    3970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2 +20000M
	I0920 10:23:01.536877    3970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:01.536893    3970 main.go:141] libmachine: STDERR: 
	I0920 10:23:01.536909    3970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:01.536915    3970 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:01.536927    3970 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:01.536956    3970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:2a:9b:a3:72:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:01.538709    3970 main.go:141] libmachine: STDOUT: 
	I0920 10:23:01.538732    3970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:01.538756    3970 client.go:171] duration metric: took 317.659959ms to LocalClient.Create
	I0920 10:23:03.540584    3970 start.go:128] duration metric: took 2.344131208s to createHost
	I0920 10:23:03.540609    3970 start.go:83] releasing machines lock for "offline-docker-042000", held for 2.344200583s
	W0920 10:23:03.540625    3970 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:03.556407    3970 out.go:177] * Deleting "offline-docker-042000" in qemu2 ...
	W0920 10:23:03.570067    3970 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:03.570083    3970 start.go:729] Will try again in 5 seconds ...
	I0920 10:23:08.572015    3970 start.go:360] acquireMachinesLock for offline-docker-042000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:08.572141    3970 start.go:364] duration metric: took 102.917µs to acquireMachinesLock for "offline-docker-042000"
	I0920 10:23:08.572487    3970 start.go:93] Provisioning new machine with config: &{Name:offline-docker-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:08.572550    3970 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:08.583750    3970 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:08.599530    3970 start.go:159] libmachine.API.Create for "offline-docker-042000" (driver="qemu2")
	I0920 10:23:08.599558    3970 client.go:168] LocalClient.Create starting
	I0920 10:23:08.599625    3970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:08.599664    3970 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:08.599672    3970 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:08.599704    3970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:08.599726    3970 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:08.599738    3970 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:08.600054    3970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:08.764896    3970 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:08.843224    3970 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:08.843240    3970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:08.843480    3970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:08.853167    3970 main.go:141] libmachine: STDOUT: 
	I0920 10:23:08.853183    3970 main.go:141] libmachine: STDERR: 
	I0920 10:23:08.853262    3970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2 +20000M
	I0920 10:23:08.861525    3970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:08.861557    3970 main.go:141] libmachine: STDERR: 
	I0920 10:23:08.861573    3970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:08.861579    3970 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:08.861590    3970 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:08.861619    3970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b2:43:b0:92:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/offline-docker-042000/disk.qcow2
	I0920 10:23:08.863291    3970 main.go:141] libmachine: STDOUT: 
	I0920 10:23:08.863306    3970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:08.863322    3970 client.go:171] duration metric: took 263.768ms to LocalClient.Create
	I0920 10:23:10.865467    3970 start.go:128] duration metric: took 2.292955584s to createHost
	I0920 10:23:10.865558    3970 start.go:83] releasing machines lock for "offline-docker-042000", held for 2.293469917s
	W0920 10:23:10.865915    3970 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:10.881499    3970 out.go:201] 
	W0920 10:23:10.886478    3970 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:23:10.886504    3970 out.go:270] * 
	* 
	W0920 10:23:10.889097    3970 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:23:10.897450    3970 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-042000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-20 10:23:10.911012 -0700 PDT m=+2393.207819209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-042000 -n offline-docker-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-042000 -n offline-docker-042000: exit status 7 (68.915167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-042000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-042000
--- FAIL: TestOffline (9.99s)
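The failure above, like most of the qemu2 start failures later in this report, is host-side: the qemu2 driver hands the VM's network socket to /opt/socket_vmnet/bin/socket_vmnet_client, and the daemon behind /var/run/socket_vmnet refuses the connection before the VM ever boots. A minimal triage sketch on the build host, assuming the socket path from the log and a Homebrew-installed socket_vmnet (illustrative commands, not part of the test run):

	# does the socket the qemu2 driver dials exist at all?
	ls -l /var/run/socket_vmnet
	# is a socket_vmnet daemon running to answer it?
	pgrep -fl socket_vmnet
	# if not, restart the daemon and re-run the test; with the Homebrew
	# formula this is typically: sudo brew services restart socket_vmnet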

TestAddons/parallel/Registry (71.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.707292ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-72d7c" [5f7510b7-98b0-47da-bdfc-1e2ed64223f4] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006479792s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5jb54" [8f659e70-1afc-409c-9508-da9cd6399f18] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009408791s
addons_test.go:338: (dbg) Run:  kubectl --context addons-649000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-649000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-649000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.050003334s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-649000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 ip
2024/09/20 09:56:49 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable registry --alsologtostderr -v=1
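The in-cluster wget against registry.kube-system.svc.cluster.local above timed out after a minute with kubectl's "timed out waiting for the condition" error, even though both registry pods had reported Running earlier in the test. When reproducing this by hand, the first thing to check is whether the registry Service had live endpoints at the time; a sketch using the same addons-649000 context (illustrative only, assuming the registry addon's usual Service name in kube-system):

	kubectl --context addons-649000 -n kube-system get svc registry
	kubectl --context addons-649000 -n kube-system get endpoints registry
	kubectl --context addons-649000 -n kube-system get pods -l actual-registry=true -o wide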
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-649000 -n addons-649000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | -p download-only-310000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| delete  | -p download-only-310000                                                                     | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| start   | -o=json --download-only                                                                     | download-only-135000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | -p download-only-135000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| delete  | -p download-only-135000                                                                     | download-only-135000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| delete  | -p download-only-310000                                                                     | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| delete  | -p download-only-135000                                                                     | download-only-135000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-190000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | binary-mirror-190000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-190000                                                                     | binary-mirror-190000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| addons  | disable dashboard -p                                                                        | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | addons-649000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | addons-649000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-649000 --wait=true                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:46 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649000 addons disable                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:47 PDT | 20 Sep 24 09:47 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:55 PDT | 20 Sep 24 09:55 PDT |
	|         | -p addons-649000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649000 addons disable                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:55 PDT | 20 Sep 24 09:55 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649000 addons disable                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | -p addons-649000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-649000 ssh cat                                                                       | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | /opt/local-path-provisioner/pvc-22cbbd5a-51bb-437b-855a-250da94f44d8_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-649000 addons disable                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | addons-649000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-649000 addons                                                                        | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649000 ip                                                                            | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	| addons  | addons-649000 addons disable                                                                | addons-649000        | jenkins | v1.34.0 | 20 Sep 24 09:56 PDT | 20 Sep 24 09:56 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 09:43:41
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 09:43:41.038884    1759 out.go:345] Setting OutFile to fd 1 ...
	I0920 09:43:41.039011    1759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:41.039014    1759 out.go:358] Setting ErrFile to fd 2...
	I0920 09:43:41.039016    1759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:41.039171    1759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 09:43:41.040275    1759 out.go:352] Setting JSON to false
	I0920 09:43:41.056186    1759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":784,"bootTime":1726849837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 09:43:41.056247    1759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 09:43:41.061311    1759 out.go:177] * [addons-649000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 09:43:41.068321    1759 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 09:43:41.068383    1759 notify.go:220] Checking for updates...
	I0920 09:43:41.075249    1759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:43:41.078245    1759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 09:43:41.088892    1759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:43:41.092287    1759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 09:43:41.095233    1759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 09:43:41.098403    1759 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 09:43:41.102199    1759 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 09:43:41.109290    1759 start.go:297] selected driver: qemu2
	I0920 09:43:41.109298    1759 start.go:901] validating driver "qemu2" against <nil>
	I0920 09:43:41.109305    1759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 09:43:41.111565    1759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 09:43:41.115185    1759 out.go:177] * Automatically selected the socket_vmnet network
	I0920 09:43:41.118316    1759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 09:43:41.118337    1759 cni.go:84] Creating CNI manager for ""
	I0920 09:43:41.118383    1759 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:43:41.118389    1759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 09:43:41.118432    1759 start.go:340] cluster config:
	{Name:addons-649000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-649000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:43:41.122349    1759 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:43:41.130216    1759 out.go:177] * Starting "addons-649000" primary control-plane node in "addons-649000" cluster
	I0920 09:43:41.134220    1759 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:43:41.134250    1759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 09:43:41.134256    1759 cache.go:56] Caching tarball of preloaded images
	I0920 09:43:41.134325    1759 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 09:43:41.134331    1759 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 09:43:41.134532    1759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/config.json ...
	I0920 09:43:41.134544    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/config.json: {Name:mk1da997b7f13aaed8b89fcfbc6e283bf051d838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:43:41.134975    1759 start.go:360] acquireMachinesLock for addons-649000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 09:43:41.135050    1759 start.go:364] duration metric: took 68.042µs to acquireMachinesLock for "addons-649000"
	I0920 09:43:41.135062    1759 start.go:93] Provisioning new machine with config: &{Name:addons-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-649000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 09:43:41.135113    1759 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 09:43:41.139286    1759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 09:43:41.379460    1759 start.go:159] libmachine.API.Create for "addons-649000" (driver="qemu2")
	I0920 09:43:41.379511    1759 client.go:168] LocalClient.Create starting
	I0920 09:43:41.379690    1759 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 09:43:41.530530    1759 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 09:43:41.661791    1759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 09:43:41.854779    1759 main.go:141] libmachine: Creating SSH key...
	I0920 09:43:41.960678    1759 main.go:141] libmachine: Creating Disk image...
	I0920 09:43:41.960688    1759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 09:43:41.960942    1759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2
	I0920 09:43:41.980137    1759 main.go:141] libmachine: STDOUT: 
	I0920 09:43:41.980166    1759 main.go:141] libmachine: STDERR: 
	I0920 09:43:41.980235    1759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2 +20000M
	I0920 09:43:41.988205    1759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 09:43:41.988221    1759 main.go:141] libmachine: STDERR: 
	I0920 09:43:41.988239    1759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2
	I0920 09:43:41.988246    1759 main.go:141] libmachine: Starting QEMU VM...
	I0920 09:43:41.988283    1759 qemu.go:418] Using hvf for hardware acceleration
	I0920 09:43:41.988313    1759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5c:d7:f9:f2:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/disk.qcow2
	I0920 09:43:42.044087    1759 main.go:141] libmachine: STDOUT: 
	I0920 09:43:42.044127    1759 main.go:141] libmachine: STDERR: 
	I0920 09:43:42.044131    1759 main.go:141] libmachine: Attempt 0
	I0920 09:43:42.044149    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:42.044199    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:42.044219    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:44.046384    1759 main.go:141] libmachine: Attempt 1
	I0920 09:43:44.046460    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:44.046767    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:44.046818    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:46.047069    1759 main.go:141] libmachine: Attempt 2
	I0920 09:43:46.047139    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:46.047488    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:46.047539    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:48.049635    1759 main.go:141] libmachine: Attempt 3
	I0920 09:43:48.049688    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:48.049788    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:48.049810    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:50.051789    1759 main.go:141] libmachine: Attempt 4
	I0920 09:43:50.051801    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:50.051879    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:50.051916    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:52.053908    1759 main.go:141] libmachine: Attempt 5
	I0920 09:43:52.053914    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:52.053943    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:52.053949    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:54.055931    1759 main.go:141] libmachine: Attempt 6
	I0920 09:43:54.055947    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:54.056013    1759 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:43:54.056023    1759 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eef779}
	I0920 09:43:56.058028    1759 main.go:141] libmachine: Attempt 7
	I0920 09:43:56.058052    1759 main.go:141] libmachine: Searching for 32:5c:d7:f9:f2:9d in /var/db/dhcpd_leases ...
	I0920 09:43:56.058197    1759 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0920 09:43:56.058210    1759 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:5c:d7:f9:f2:9d ID:1,32:5c:d7:f9:f2:9d Lease:0x66eef7ca}
	I0920 09:43:56.058221    1759 main.go:141] libmachine: Found match: 32:5c:d7:f9:f2:9d
	I0920 09:43:56.058229    1759 main.go:141] libmachine: IP: 192.168.105.2
	I0920 09:43:56.058234    1759 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0920 09:43:58.078288    1759 machine.go:93] provisionDockerMachine start ...
	I0920 09:43:58.079856    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.080317    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.080332    1759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 09:43:58.146419    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 09:43:58.146455    1759 buildroot.go:166] provisioning hostname "addons-649000"
	I0920 09:43:58.146606    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.146864    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.146876    1759 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649000 && echo "addons-649000" | sudo tee /etc/hostname
	I0920 09:43:58.203932    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649000
	
	I0920 09:43:58.204022    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.204167    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.204178    1759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 09:43:58.250360    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 09:43:58.250372    1759 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-1143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-1143/.minikube}
	I0920 09:43:58.250385    1759 buildroot.go:174] setting up certificates
	I0920 09:43:58.250389    1759 provision.go:84] configureAuth start
	I0920 09:43:58.250392    1759 provision.go:143] copyHostCerts
	I0920 09:43:58.250485    1759 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem (1078 bytes)
	I0920 09:43:58.250718    1759 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem (1123 bytes)
	I0920 09:43:58.250829    1759 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem (1679 bytes)
	I0920 09:43:58.250911    1759 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem org=jenkins.addons-649000 san=[127.0.0.1 192.168.105.2 addons-649000 localhost minikube]
	I0920 09:43:58.315151    1759 provision.go:177] copyRemoteCerts
	I0920 09:43:58.315212    1759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 09:43:58.315229    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:43:58.340551    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 09:43:58.348698    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 09:43:58.357160    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 09:43:58.365566    1759 provision.go:87] duration metric: took 115.16225ms to configureAuth
	I0920 09:43:58.365576    1759 buildroot.go:189] setting minikube options for container-runtime
	I0920 09:43:58.365696    1759 config.go:182] Loaded profile config "addons-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 09:43:58.365749    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.365841    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.365846    1759 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 09:43:58.405140    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 09:43:58.405148    1759 buildroot.go:70] root file system type: tmpfs
	I0920 09:43:58.405193    1759 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 09:43:58.405247    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.405338    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.405370    1759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 09:43:58.448491    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 09:43:58.448547    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:58.448652    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:58.448660    1759 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 09:43:59.844881    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 09:43:59.844894    1759 machine.go:96] duration metric: took 1.766638708s to provisionDockerMachine
	I0920 09:43:59.844900    1759 client.go:171] duration metric: took 18.466012792s to LocalClient.Create
	I0920 09:43:59.844915    1759 start.go:167] duration metric: took 18.466086792s to libmachine.API.Create "addons-649000"
	I0920 09:43:59.844921    1759 start.go:293] postStartSetup for "addons-649000" (driver="qemu2")
	I0920 09:43:59.844927    1759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 09:43:59.845010    1759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 09:43:59.845019    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:43:59.867897    1759 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 09:43:59.869565    1759 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 09:43:59.869574    1759 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/addons for local assets ...
	I0920 09:43:59.869666    1759 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/files for local assets ...
	I0920 09:43:59.869697    1759 start.go:296] duration metric: took 24.774041ms for postStartSetup
	I0920 09:43:59.870113    1759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/config.json ...
	I0920 09:43:59.870298    1759 start.go:128] duration metric: took 18.735817584s to createHost
	I0920 09:43:59.870332    1759 main.go:141] libmachine: Using SSH client type: native
	I0920 09:43:59.870420    1759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050a5c00] 0x1050a8440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0920 09:43:59.870425    1759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 09:43:59.912814    1759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726850639.936839794
	
	I0920 09:43:59.912823    1759 fix.go:216] guest clock: 1726850639.936839794
	I0920 09:43:59.912827    1759 fix.go:229] Guest: 2024-09-20 09:43:59.936839794 -0700 PDT Remote: 2024-09-20 09:43:59.870301 -0700 PDT m=+18.850582084 (delta=66.538794ms)
	I0920 09:43:59.912839    1759 fix.go:200] guest clock delta is within tolerance: 66.538794ms
	I0920 09:43:59.912842    1759 start.go:83] releasing machines lock for "addons-649000", held for 18.778425792s
	I0920 09:43:59.913129    1759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 09:43:59.913129    1759 ssh_runner.go:195] Run: cat /version.json
	I0920 09:43:59.913163    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:43:59.913165    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:43:59.932765    1759 ssh_runner.go:195] Run: systemctl --version
	I0920 09:43:59.934881    1759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 09:43:59.982204    1759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 09:43:59.982261    1759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 09:43:59.988543    1759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 09:43:59.988560    1759 start.go:495] detecting cgroup driver to use...
	I0920 09:43:59.988716    1759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 09:43:59.995217    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 09:43:59.999127    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 09:44:00.003135    1759 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 09:44:00.003178    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 09:44:00.007144    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 09:44:00.011503    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 09:44:00.015210    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 09:44:00.019084    1759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 09:44:00.023004    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 09:44:00.027055    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 09:44:00.030831    1759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
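The run of sed commands above (09:43:59.995 through 09:44:00.030) rewrites /etc/containerd/config.toml in place: pause image, restrict_oom_score_adj, SystemdCgroup, the runc v2 runtime, the CNI conf_dir, and enable_unprivileged_ports. A hedged spot-check one could run inside the guest (for example via minikube ssh); the grep pattern below simply lists the keys those seds touch and is not something minikube itself runs:

    # Sketch only: confirm the containerd keys rewritten by the sed commands above.
    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # Expected values, inferred from the sed expressions (not echoed in this log):
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true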
	I0920 09:44:00.034945    1759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 09:44:00.038838    1759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 09:44:00.038872    1759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 09:44:00.043224    1759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 09:44:00.046876    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:00.133843    1759 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 09:44:00.145085    1759 start.go:495] detecting cgroup driver to use...
	I0920 09:44:00.145160    1759 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 09:44:00.151276    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 09:44:00.156791    1759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 09:44:00.163642    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 09:44:00.168869    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 09:44:00.174049    1759 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 09:44:00.222387    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 09:44:00.228956    1759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 09:44:00.235727    1759 ssh_runner.go:195] Run: which cri-dockerd
	I0920 09:44:00.237237    1759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 09:44:00.240529    1759 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 09:44:00.246334    1759 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 09:44:00.332913    1759 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 09:44:00.419667    1759 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 09:44:00.419735    1759 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 09:44:00.425809    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:00.508671    1759 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 09:44:02.685135    1759 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.176521375s)
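The docker.go:574 lines above write a 130-byte /etc/docker/daemon.json to pin the "cgroupfs" cgroup driver, then reload and restart docker; the file's contents are not echoed in the log. A hedged way to confirm the effect from inside the guest (docker info --format '{{.CgroupDriver}}' is the same query minikube runs later in this log):

    # Sketch only: inspect the daemon.json written above and the driver docker reports.
    cat /etc/docker/daemon.json
    docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs
    sudo systemctl is-active docker            # expected: active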
	I0920 09:44:02.685213    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 09:44:02.690534    1759 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 09:44:02.697340    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 09:44:02.702554    1759 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 09:44:02.784557    1759 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 09:44:02.868869    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:02.952886    1759 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 09:44:02.959006    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 09:44:02.964765    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:03.035196    1759 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 09:44:03.060658    1759 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 09:44:03.060761    1759 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 09:44:03.062828    1759 start.go:563] Will wait 60s for crictl version
	I0920 09:44:03.062879    1759 ssh_runner.go:195] Run: which crictl
	I0920 09:44:03.064439    1759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 09:44:03.085801    1759 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 09:44:03.085880    1759 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 09:44:03.114189    1759 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 09:44:03.125654    1759 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 09:44:03.125804    1759 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0920 09:44:03.127196    1759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 09:44:03.131393    1759 kubeadm.go:883] updating cluster {Name:addons-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-649000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 09:44:03.131443    1759 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:44:03.131496    1759 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 09:44:03.136716    1759 docker.go:685] Got preloaded images: 
	I0920 09:44:03.136725    1759 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0920 09:44:03.136778    1759 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 09:44:03.140189    1759 ssh_runner.go:195] Run: which lz4
	I0920 09:44:03.141495    1759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 09:44:03.142932    1759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 09:44:03.142944    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0920 09:44:04.400922    1759 docker.go:649] duration metric: took 1.259506292s to copy over tarball
	I0920 09:44:04.401029    1759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 09:44:05.365481    1759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 09:44:05.380514    1759 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 09:44:05.383966    1759 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0920 09:44:05.389484    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:05.471090    1759 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 09:44:07.667796    1759 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.196760333s)
	I0920 09:44:07.667914    1759 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 09:44:07.673688    1759 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 09:44:07.673699    1759 cache_images.go:84] Images are preloaded, skipping loading
	I0920 09:44:07.673704    1759 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0920 09:44:07.673763    1759 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-649000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-649000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 09:44:07.673834    1759 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 09:44:07.692882    1759 cni.go:84] Creating CNI manager for ""
	I0920 09:44:07.692894    1759 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:44:07.692899    1759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 09:44:07.692909    1759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649000 NodeName:addons-649000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 09:44:07.692979    1759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-649000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
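The kubeadm config rendered above uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 still accepts but flags as deprecated (see the W0920 16:44:08.785627 warnings later in this log). A hedged sketch of how that file could be validated or migrated by hand on the guest, using the binary path and the kubeadm.yaml.new path that appear in this log; the output path for the migrated file is illustrative only:

    # Sketch only: check/migrate the generated kubeadm config on the guest.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml.new \
        --new-config /tmp/kubeadm-migrated.yaml   # hypothetical output path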
	I0920 09:44:07.693052    1759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 09:44:07.696824    1759 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 09:44:07.696857    1759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 09:44:07.700157    1759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 09:44:07.705943    1759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 09:44:07.711707    1759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 09:44:07.717698    1759 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0920 09:44:07.718985    1759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 09:44:07.722973    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:07.804557    1759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 09:44:07.812273    1759 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000 for IP: 192.168.105.2
	I0920 09:44:07.812297    1759 certs.go:194] generating shared ca certs ...
	I0920 09:44:07.812309    1759 certs.go:226] acquiring lock for ca certs: {Name:mk7151e0388cf18b174fabc4929e6178a41b4c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:07.812502    1759 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key
	I0920 09:44:07.890113    1759 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt ...
	I0920 09:44:07.890125    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt: {Name:mk0795cf4795374700f91483e288170c25646ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:07.890423    1759 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key ...
	I0920 09:44:07.890433    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key: {Name:mked867ed8abafefa13e9707fed2ba86bd47316f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:07.890575    1759 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key
	I0920 09:44:08.266048    1759 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt ...
	I0920 09:44:08.266068    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt: {Name:mkb6880f17ed2842bba928600d832356ee9046c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.266393    1759 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key ...
	I0920 09:44:08.266397    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key: {Name:mk84a5bcc3bb1f4c4bcf04c1779b7ce760a614e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.266555    1759 certs.go:256] generating profile certs ...
	I0920 09:44:08.266598    1759 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.key
	I0920 09:44:08.266605    1759 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt with IP's: []
	I0920 09:44:08.419106    1759 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt ...
	I0920 09:44:08.419115    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: {Name:mk9aecaea884b98f49d3f84a9ad8932e96607934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.419363    1759 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.key ...
	I0920 09:44:08.419367    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.key: {Name:mk2788e9f2fe3dd7e3da43bfc38184226e30ba5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.419486    1759 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key.a7348f26
	I0920 09:44:08.419496    1759 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt.a7348f26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0920 09:44:08.550602    1759 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt.a7348f26 ...
	I0920 09:44:08.550613    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt.a7348f26: {Name:mkead4c924edfd90140536388c291663a095dd2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.550877    1759 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key.a7348f26 ...
	I0920 09:44:08.550882    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key.a7348f26: {Name:mkc34386986aa9ad230f36ad9a1a6f54cd176ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.550999    1759 certs.go:381] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt.a7348f26 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt
	I0920 09:44:08.551230    1759 certs.go:385] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key.a7348f26 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key
	I0920 09:44:08.551364    1759 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.key
	I0920 09:44:08.551376    1759 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.crt with IP's: []
	I0920 09:44:08.600331    1759 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.crt ...
	I0920 09:44:08.600336    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.crt: {Name:mkac695bd5ad45cb4f8b8ce7b32abf611ce84849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.600495    1759 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.key ...
	I0920 09:44:08.600498    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.key: {Name:mkb2e24bebdc93292abb24ee0433bd077016e49a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:08.600766    1759 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 09:44:08.600788    1759 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem (1078 bytes)
	I0920 09:44:08.600807    1759 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem (1123 bytes)
	I0920 09:44:08.600824    1759 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem (1679 bytes)
	I0920 09:44:08.601231    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 09:44:08.610090    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 09:44:08.618492    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 09:44:08.626683    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 09:44:08.635044    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 09:44:08.643095    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 09:44:08.651026    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 09:44:08.659036    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 09:44:08.667207    1759 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 09:44:08.675221    1759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 09:44:08.682101    1759 ssh_runner.go:195] Run: openssl version
	I0920 09:44:08.684338    1759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 09:44:08.687850    1759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 09:44:08.689297    1759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 09:44:08.689325    1759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 09:44:08.691286    1759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
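The two steps above hash the minikube CA with openssl and link it into /etc/ssl/certs under its subject-hash name (b5213941.0), which is how OpenSSL-based tools on the guest locate the CA. A minimal sketch of the same idea, using only paths already shown in this log:

    # Sketch only: recompute the subject hash and recreate the symlink by hand.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "${HASH}"                                   # b5213941 in this log; depends on the CA subject
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"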
	I0920 09:44:08.695169    1759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 09:44:08.696668    1759 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 09:44:08.696709    1759 kubeadm.go:392] StartCluster: {Name:addons-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:addons-649000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:44:08.696792    1759 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 09:44:08.704329    1759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 09:44:08.707813    1759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 09:44:08.711192    1759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 09:44:08.714679    1759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 09:44:08.714686    1759 kubeadm.go:157] found existing configuration files:
	
	I0920 09:44:08.714712    1759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 09:44:08.718349    1759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 09:44:08.718377    1759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 09:44:08.721988    1759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 09:44:08.725523    1759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 09:44:08.725552    1759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 09:44:08.729076    1759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 09:44:08.732239    1759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 09:44:08.732265    1759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 09:44:08.735436    1759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 09:44:08.738783    1759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 09:44:08.738807    1759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 09:44:08.742324    1759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 09:44:08.762339    1759 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 09:44:08.762432    1759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 09:44:08.799051    1759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 09:44:08.799115    1759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 09:44:08.799158    1759 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 09:44:08.803104    1759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 09:44:08.818137    1759 out.go:235]   - Generating certificates and keys ...
	I0920 09:44:08.818169    1759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 09:44:08.818212    1759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 09:44:08.869105    1759 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 09:44:08.951833    1759 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 09:44:09.056140    1759 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 09:44:09.134267    1759 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 09:44:09.263710    1759 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 09:44:09.263773    1759 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-649000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0920 09:44:09.405061    1759 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 09:44:09.405141    1759 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-649000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0920 09:44:09.481370    1759 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 09:44:09.515083    1759 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 09:44:09.617651    1759 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 09:44:09.617693    1759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 09:44:09.675913    1759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 09:44:09.834755    1759 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 09:44:09.896596    1759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 09:44:10.008554    1759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 09:44:10.207036    1759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 09:44:10.207232    1759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 09:44:10.208402    1759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 09:44:10.222432    1759 out.go:235]   - Booting up control plane ...
	I0920 09:44:10.222482    1759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 09:44:10.222522    1759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 09:44:10.222567    1759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 09:44:10.222616    1759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 09:44:10.222678    1759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 09:44:10.222705    1759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 09:44:10.300631    1759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 09:44:10.300692    1759 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 09:44:10.806647    1759 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 508.43525ms
	I0920 09:44:10.806689    1759 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 09:44:14.311069    1759 kubeadm.go:310] [api-check] The API server is healthy after 3.50396396s
	I0920 09:44:14.337860    1759 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 09:44:14.352086    1759 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 09:44:14.365562    1759 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 09:44:14.365736    1759 kubeadm.go:310] [mark-control-plane] Marking the node addons-649000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 09:44:14.370717    1759 kubeadm.go:310] [bootstrap-token] Using token: t8rjaa.tzv9yusmmaxfhtec
	I0920 09:44:14.377971    1759 out.go:235]   - Configuring RBAC rules ...
	I0920 09:44:14.378055    1759 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 09:44:14.379321    1759 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 09:44:14.385740    1759 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 09:44:14.386896    1759 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 09:44:14.388525    1759 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 09:44:14.390175    1759 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 09:44:14.719303    1759 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 09:44:15.124437    1759 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 09:44:15.718556    1759 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 09:44:15.719127    1759 kubeadm.go:310] 
	I0920 09:44:15.719161    1759 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 09:44:15.719164    1759 kubeadm.go:310] 
	I0920 09:44:15.719213    1759 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 09:44:15.719217    1759 kubeadm.go:310] 
	I0920 09:44:15.719231    1759 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 09:44:15.719269    1759 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 09:44:15.719334    1759 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 09:44:15.719341    1759 kubeadm.go:310] 
	I0920 09:44:15.719381    1759 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 09:44:15.719385    1759 kubeadm.go:310] 
	I0920 09:44:15.719412    1759 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 09:44:15.719419    1759 kubeadm.go:310] 
	I0920 09:44:15.719495    1759 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 09:44:15.719552    1759 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 09:44:15.719649    1759 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 09:44:15.719659    1759 kubeadm.go:310] 
	I0920 09:44:15.719728    1759 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 09:44:15.719818    1759 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 09:44:15.719828    1759 kubeadm.go:310] 
	I0920 09:44:15.719879    1759 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t8rjaa.tzv9yusmmaxfhtec \
	I0920 09:44:15.719995    1759 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a \
	I0920 09:44:15.720035    1759 kubeadm.go:310] 	--control-plane 
	I0920 09:44:15.720041    1759 kubeadm.go:310] 
	I0920 09:44:15.720107    1759 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 09:44:15.720115    1759 kubeadm.go:310] 
	I0920 09:44:15.720169    1759 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t8rjaa.tzv9yusmmaxfhtec \
	I0920 09:44:15.720232    1759 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a 
	I0920 09:44:15.720454    1759 kubeadm.go:310] W0920 16:44:08.785627    1585 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 09:44:15.720658    1759 kubeadm.go:310] W0920 16:44:08.785915    1585 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 09:44:15.720742    1759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 09:44:15.720753    1759 cni.go:84] Creating CNI manager for ""
	I0920 09:44:15.720765    1759 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:44:15.728241    1759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 09:44:15.732367    1759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 09:44:15.736856    1759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 09:44:15.743258    1759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 09:44:15.743348    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:15.743362    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649000 minikube.k8s.io/updated_at=2024_09_20T09_44_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-649000 minikube.k8s.io/primary=true
	I0920 09:44:15.749074    1759 ops.go:34] apiserver oom_adj: -16
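The two kubectl invocations above (09:44:15.743) create the minikube-rbac cluster-admin binding for the kube-system default service account and label the node with minikube.k8s.io/* metadata. A hedged way to confirm both from the host once the kubeconfig written earlier is in use:

    # Sketch only: verify the node labels and RBAC binding applied above.
    kubectl get node addons-649000 --show-labels
    kubectl get clusterrolebinding minikube-rbac -o wide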
	I0920 09:44:15.801053    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:16.303174    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:16.803187    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:17.303097    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:17.803039    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:18.303089    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:18.803284    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:19.301661    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:19.803118    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:20.302989    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:20.802958    1759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 09:44:20.849630    1759 kubeadm.go:1113] duration metric: took 5.10650125s to wait for elevateKubeSystemPrivileges
	I0920 09:44:20.849647    1759 kubeadm.go:394] duration metric: took 12.153351834s to StartCluster
	I0920 09:44:20.849658    1759 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:20.849821    1759 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:44:20.850021    1759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:44:20.850307    1759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 09:44:20.850327    1759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 09:44:20.850346    1759 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 09:44:20.850396    1759 addons.go:69] Setting yakd=true in profile "addons-649000"
	I0920 09:44:20.850400    1759 config.go:182] Loaded profile config "addons-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 09:44:20.850404    1759 addons.go:234] Setting addon yakd=true in "addons-649000"
	I0920 09:44:20.850416    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850438    1759 addons.go:69] Setting inspektor-gadget=true in profile "addons-649000"
	I0920 09:44:20.850443    1759 addons.go:234] Setting addon inspektor-gadget=true in "addons-649000"
	I0920 09:44:20.850452    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850466    1759 addons.go:69] Setting storage-provisioner=true in profile "addons-649000"
	I0920 09:44:20.850473    1759 addons.go:234] Setting addon storage-provisioner=true in "addons-649000"
	I0920 09:44:20.850488    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850486    1759 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649000"
	I0920 09:44:20.850494    1759 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-649000"
	I0920 09:44:20.850522    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850703    1759 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649000"
	I0920 09:44:20.850708    1759 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649000"
	I0920 09:44:20.850731    1759 retry.go:31] will retry after 960.734674ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850738    1759 addons.go:69] Setting registry=true in profile "addons-649000"
	I0920 09:44:20.850742    1759 addons.go:234] Setting addon registry=true in "addons-649000"
	I0920 09:44:20.850749    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850758    1759 retry.go:31] will retry after 586.070211ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850764    1759 addons.go:69] Setting metrics-server=true in profile "addons-649000"
	I0920 09:44:20.850768    1759 addons.go:234] Setting addon metrics-server=true in "addons-649000"
	I0920 09:44:20.850774    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850827    1759 retry.go:31] will retry after 1.415855027s: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850834    1759 addons.go:69] Setting volcano=true in profile "addons-649000"
	I0920 09:44:20.850856    1759 addons.go:69] Setting volumesnapshots=true in profile "addons-649000"
	I0920 09:44:20.850861    1759 addons.go:234] Setting addon volumesnapshots=true in "addons-649000"
	I0920 09:44:20.850869    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850939    1759 retry.go:31] will retry after 557.701287ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850944    1759 addons.go:69] Setting default-storageclass=true in profile "addons-649000"
	I0920 09:44:20.850941    1759 addons.go:234] Setting addon volcano=true in "addons-649000"
	I0920 09:44:20.850955    1759 addons.go:69] Setting gcp-auth=true in profile "addons-649000"
	I0920 09:44:20.850962    1759 mustload.go:65] Loading cluster: addons-649000
	I0920 09:44:20.850971    1759 retry.go:31] will retry after 597.432007ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850985    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850983    1759 addons.go:69] Setting ingress-dns=true in profile "addons-649000"
	I0920 09:44:20.851014    1759 addons.go:234] Setting addon ingress-dns=true in "addons-649000"
	I0920 09:44:20.851025    1759 config.go:182] Loaded profile config "addons-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 09:44:20.851055    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850947    1759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649000"
	I0920 09:44:20.851183    1759 retry.go:31] will retry after 734.583917ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851064    1759 retry.go:31] will retry after 796.958075ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.850951    1759 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649000"
	I0920 09:44:20.851204    1759 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-649000"
	I0920 09:44:20.851212    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850953    1759 addons.go:69] Setting ingress=true in profile "addons-649000"
	I0920 09:44:20.851236    1759 addons.go:234] Setting addon ingress=true in "addons-649000"
	I0920 09:44:20.851258    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.850949    1759 addons.go:69] Setting cloud-spanner=true in profile "addons-649000"
	I0920 09:44:20.851281    1759 addons.go:234] Setting addon cloud-spanner=true in "addons-649000"
	I0920 09:44:20.851296    1759 retry.go:31] will retry after 732.711253ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851303    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.851305    1759 retry.go:31] will retry after 1.301502093s: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851404    1759 retry.go:31] will retry after 1.280873294s: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851408    1759 retry.go:31] will retry after 1.337232043s: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851533    1759 retry.go:31] will retry after 1.479603676s: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.851561    1759 retry.go:31] will retry after 757.953112ms: connect: dial unix /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/monitor: connect: connection refused
	I0920 09:44:20.853682    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:20.857289    1759 out.go:177] * Verifying Kubernetes components...
	I0920 09:44:20.861246    1759 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 09:44:20.867341    1759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:44:20.871235    1759 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 09:44:20.871242    1759 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 09:44:20.871251    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:20.937452    1759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
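The one-liner above rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to 192.168.105.1 (the injected text is visible inside the sed expression). A hedged sketch for reading the patched Corefile back; the trailing grep only narrows the output:

    # Sketch only: show the hosts block injected into CoreDNS above.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'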
	I0920 09:44:21.037923    1759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 09:44:21.158652    1759 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 09:44:21.158668    1759 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 09:44:21.170801    1759 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 09:44:21.170818    1759 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 09:44:21.199386    1759 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 09:44:21.199398    1759 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 09:44:21.214539    1759 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 09:44:21.214550    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 09:44:21.239478    1759 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
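The ssh_runner command at 09:44:20.937452 rewrites the coredns ConfigMap: a sed pipeline inserts a hosts{} stanza mapping host.minikube.internal to the host IP (192.168.105.1) immediately before the "forward . /etc/resolv.conf" plugin, then pipes the result to kubectl replace. The Go sketch below performs the same Corefile string transformation for illustration only; minikube does this with sed and kubectl, not with this helper, and the sed command additionally inserts a "log" directive before "errors", which the sketch omits.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the forward plugin line,
// mirroring what the sed expression in the log does to the Corefile.
func injectHostRecord(corefile, hostIP string) string {
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out,
				"        hosts {",
				fmt.Sprintf("           %s host.minikube.internal", hostIP),
				"           fallthrough",
				"        }")
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.105.1"))
}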
	I0920 09:44:21.239862    1759 node_ready.go:35] waiting up to 6m0s for node "addons-649000" to be "Ready" ...
	I0920 09:44:21.242269    1759 node_ready.go:49] node "addons-649000" has status "Ready":"True"
	I0920 09:44:21.242290    1759 node_ready.go:38] duration metric: took 2.404667ms for node "addons-649000" to be "Ready" ...
	I0920 09:44:21.242295    1759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 09:44:21.248532    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 09:44:21.253031    1759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace to be "Ready" ...
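The pod_ready.go and node_ready.go entries above poll the API server until the named object reports a Ready condition, giving up after 6m0s. A rough client-go equivalent of that polling loop is sketched below; waitPodReady is a hypothetical helper written for this report, not minikube's own function, and the 500ms poll interval is an assumption based on the spacing of the log lines.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its PodReady condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-skkvr", 6*time.Minute); err != nil {
		panic(err)
	}
}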
	I0920 09:44:21.415083    1759 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 09:44:21.419137    1759 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 09:44:21.422093    1759 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 09:44:21.422101    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 09:44:21.422111    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.441032    1759 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 09:44:21.447111    1759 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 09:44:21.447123    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 09:44:21.447133    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.452039    1759 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 09:44:21.455142    1759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 09:44:21.455151    1759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 09:44:21.455162    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.456924    1759 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 09:44:21.456931    1759 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 09:44:21.464580    1759 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 09:44:21.464590    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 09:44:21.472309    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 09:44:21.493051    1759 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649000 service yakd-dashboard -n yakd-dashboard
	
	I0920 09:44:21.522723    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 09:44:21.525230    1759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 09:44:21.525236    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 09:44:21.530818    1759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 09:44:21.530826    1759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 09:44:21.536553    1759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 09:44:21.536563    1759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 09:44:21.542362    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 09:44:21.590843    1759 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 09:44:21.597765    1759 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 09:44:21.601737    1759 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 09:44:21.605743    1759 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 09:44:21.606127    1759 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 09:44:21.606134    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 09:44:21.606143    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.609863    1759 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 09:44:21.609871    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 09:44:21.609877    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.614707    1759 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 09:44:21.618822    1759 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 09:44:21.622597    1759 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 09:44:21.626838    1759 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 09:44:21.626847    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 09:44:21.626855    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.652772    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 09:44:21.655671    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 09:44:21.655680    1759 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 09:44:21.655692    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.690923    1759 addons.go:475] Verifying addon registry=true in "addons-649000"
	I0920 09:44:21.696728    1759 out.go:177] * Verifying registry addon...
	I0920 09:44:21.706076    1759 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 09:44:21.716668    1759 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 09:44:21.716675    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:21.724977    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 09:44:21.742446    1759 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-649000" context rescaled to 1 replicas
	I0920 09:44:21.768157    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 09:44:21.769763    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 09:44:21.789129    1759 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 09:44:21.789144    1759 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 09:44:21.795789    1759 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 09:44:21.795805    1759 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 09:44:21.802745    1759 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 09:44:21.802756    1759 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 09:44:21.817770    1759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 09:44:21.819273    1759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 09:44:21.819280    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 09:44:21.819291    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:21.843426    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 09:44:21.843439    1759 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 09:44:21.900591    1759 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 09:44:21.900604    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 09:44:22.004117    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 09:44:22.042619    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 09:44:22.138854    1759 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 09:44:22.142709    1759 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 09:44:22.142723    1759 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 09:44:22.142735    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:22.155728    1759 addons.go:234] Setting addon default-storageclass=true in "addons-649000"
	I0920 09:44:22.155751    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:22.156322    1759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 09:44:22.156330    1759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 09:44:22.156336    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:22.193709    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 09:44:22.197699    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 09:44:22.200690    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 09:44:22.204766    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 09:44:22.208650    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 09:44:22.212733    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 09:44:22.222684    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 09:44:22.232701    1759 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 09:44:22.234059    1759 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 09:44:22.234067    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:22.235760    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 09:44:22.235769    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 09:44:22.235779    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:22.267570    1759 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-649000"
	I0920 09:44:22.267591    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:22.270736    1759 out.go:177]   - Using image docker.io/busybox:stable
	I0920 09:44:22.280737    1759 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 09:44:22.286702    1759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 09:44:22.286712    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 09:44:22.286723    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:22.335749    1759 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 09:44:22.341755    1759 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 09:44:22.341768    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 09:44:22.341778    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:22.355518    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 09:44:22.503457    1759 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 09:44:22.503470    1759 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 09:44:22.544196    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 09:44:22.565052    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 09:44:22.565068    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 09:44:22.573000    1759 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 09:44:22.573015    1759 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 09:44:22.575255    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 09:44:22.615876    1759 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 09:44:22.615889    1759 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 09:44:22.624350    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 09:44:22.624364    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 09:44:22.638050    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095702542s)
	I0920 09:44:22.638070    1759 addons.go:475] Verifying addon metrics-server=true in "addons-649000"
	I0920 09:44:22.731461    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 09:44:22.731476    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 09:44:22.735463    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:22.758463    1759 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 09:44:22.758477    1759 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 09:44:22.795722    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 09:44:22.795737    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 09:44:22.853281    1759 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 09:44:22.853294    1759 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 09:44:22.919533    1759 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 09:44:22.919545    1759 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 09:44:22.920967    1759 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 09:44:22.920974    1759 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 09:44:22.984038    1759 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 09:44:22.984048    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 09:44:23.034112    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 09:44:23.034124    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 09:44:23.083318    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 09:44:23.083334    1759 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 09:44:23.089581    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 09:44:23.089590    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 09:44:23.100375    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 09:44:23.105622    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 09:44:23.105633    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 09:44:23.210024    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:23.212275    1759 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 09:44:23.212284    1759 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 09:44:23.255684    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:23.343728    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 09:44:23.709953    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:24.249435    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:24.749376    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:25.239214    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:25.275257    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:25.518176    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.75012875s)
	I0920 09:44:25.518180    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.793316708s)
	I0920 09:44:25.518254    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.748609625s)
	I0920 09:44:25.518261    1759 addons.go:475] Verifying addon ingress=true in "addons-649000"
	I0920 09:44:25.518293    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.514282583s)
	I0920 09:44:25.518364    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.974258s)
	I0920 09:44:25.518334    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.475821833s)
	I0920 09:44:25.518390    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.943219792s)
	W0920 09:44:25.518417    1759 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 09:44:25.518343    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.162923125s)
	I0920 09:44:25.518430    1759 retry.go:31] will retry after 315.143796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
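The failure above is the classic CRD ordering problem: the VolumeSnapshotClass custom resource is submitted in the same apply as the CRD that defines it, so the API server has no mapping for the kind yet, and addons.go schedules a retry (the follow-up apply at 09:44:25.841180 succeeds with --force). The sketch below shows one way to express that retry loop around kubectl with os/exec; retryApply, the attempt count, and the sleep are assumptions for illustration, not minikube's code.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// retryApply re-runs kubectl apply while the failure is the transient
// "no matches for kind" error that clears once the CRDs register.
func retryApply(ctx context.Context, kubeconfig string, manifests []string, attempts int) error {
	args := []string{"--kubeconfig=" + kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastOut string
	for i := 0; i < attempts; i++ {
		out, err := exec.CommandContext(ctx, "kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastOut = string(out)
		if !strings.Contains(lastOut, "no matches for kind") {
			return fmt.Errorf("apply failed: %v\n%s", err, lastOut)
		}
		time.Sleep(300 * time.Millisecond) // give the API server time to register the CRDs
	}
	return fmt.Errorf("apply still failing after %d attempts:\n%s", attempts, lastOut)
}

func main() {
	manifests := []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
	if err := retryApply(context.Background(), "/var/lib/minikube/kubeconfig", manifests, 5); err != nil {
		fmt.Println(err)
	}
}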
	I0920 09:44:25.525010    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.424692584s)
	I0920 09:44:25.525085    1759 out.go:177] * Verifying ingress addon...
	I0920 09:44:25.533476    1759 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 09:44:25.568968    1759 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 09:44:25.568977    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0920 09:44:25.618468    1759 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 09:44:25.733257    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:25.841180    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 09:44:25.940954    1759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.597290583s)
	I0920 09:44:25.940974    1759 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-649000"
	I0920 09:44:25.945209    1759 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 09:44:25.951589    1759 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 09:44:25.954983    1759 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 09:44:25.954993    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:26.054925    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:26.210092    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:26.456244    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:26.556968    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:26.709869    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:26.954791    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:27.055067    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:27.210171    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:27.460393    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:27.560235    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:27.710367    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:27.760225    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:27.860471    1759 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 09:44:27.860488    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:27.888259    1759 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 09:44:27.896757    1759 addons.go:234] Setting addon gcp-auth=true in "addons-649000"
	I0920 09:44:27.896782    1759 host.go:66] Checking if "addons-649000" exists ...
	I0920 09:44:27.897508    1759 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 09:44:27.897516    1759 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/addons-649000/id_rsa Username:docker}
	I0920 09:44:27.925850    1759 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 09:44:27.929935    1759 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 09:44:27.932832    1759 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 09:44:27.932838    1759 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 09:44:27.938548    1759 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 09:44:27.938557    1759 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 09:44:27.944277    1759 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 09:44:27.944284    1759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 09:44:27.950247    1759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 09:44:27.954285    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:28.037437    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:28.143870    1759 addons.go:475] Verifying addon gcp-auth=true in "addons-649000"
	I0920 09:44:28.146975    1759 out.go:177] * Verifying gcp-auth addon...
	I0920 09:44:28.155281    1759 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 09:44:28.156441    1759 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 09:44:28.210625    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:28.454352    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:28.541003    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:28.709942    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:28.956194    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:29.037889    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:29.259029    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:29.457419    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:29.557610    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:29.708637    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:29.957467    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:30.037723    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:30.208067    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:30.257248    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:30.455803    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:30.556050    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:30.709917    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:30.954379    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:31.037287    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:31.208844    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:31.455775    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:31.536648    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:31.709492    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:31.955066    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:32.037368    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:32.209301    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:32.455927    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:32.556565    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:32.756250    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:32.759078    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 09:44:32.956316    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:33.037591    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:33.209330    1759 kapi.go:107] duration metric: took 11.503647417s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 09:44:33.458021    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:33.537697    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:33.955623    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:34.036828    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:34.456929    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:34.556710    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:34.757356    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:34.955759    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:35.036820    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:35.455772    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:35.535477    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:35.955512    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:36.037362    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:36.455725    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:36.537272    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:36.757324    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:36.956021    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:37.037277    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:37.455136    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:37.537219    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:37.956086    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:38.056722    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:38.456290    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:38.538120    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:38.956301    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:39.036914    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:39.257382    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:39.455465    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:39.537138    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:39.960634    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:40.036879    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:40.455322    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:40.555539    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:40.956473    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:41.037024    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:41.259186    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:41.455791    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:41.536903    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:41.953705    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:42.037289    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:42.455340    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:42.536993    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:42.955393    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:43.036981    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:43.455288    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:43.536869    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:43.756756    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:43.955201    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:44.037030    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:44.454713    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:44.535828    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:44.955388    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:45.036996    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:45.460985    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:45.536811    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:45.757271    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:45.955313    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:46.036931    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:46.455908    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:46.544160    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:46.956241    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:47.037899    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:47.458250    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:47.536925    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:47.955277    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:48.036884    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:48.257125    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:48.455160    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:48.556451    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:48.956905    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:49.036895    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:49.454900    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:49.534891    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:49.953975    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:50.036989    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:50.455016    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:50.536383    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:50.756900    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:50.955068    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:51.037599    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:51.456947    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:51.536531    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:51.954990    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:52.037114    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:52.455161    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:52.536690    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:52.756946    1759 pod_ready.go:103] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"False"
	I0920 09:44:52.955049    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:53.036553    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:53.455948    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:53.537355    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:53.954971    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:54.054757    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:54.256471    1759 pod_ready.go:93] pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.256480    1759 pod_ready.go:82] duration metric: took 33.004558583s for pod "coredns-7c65d6cfc9-skkvr" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.256484    1759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t2cx8" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.257213    1759 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-t2cx8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-t2cx8" not found
	I0920 09:44:54.257223    1759 pod_ready.go:82] duration metric: took 735.458µs for pod "coredns-7c65d6cfc9-t2cx8" in "kube-system" namespace to be "Ready" ...
	E0920 09:44:54.257234    1759 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-t2cx8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-t2cx8" not found
	I0920 09:44:54.257239    1759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.259658    1759 pod_ready.go:93] pod "etcd-addons-649000" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.259666    1759 pod_ready.go:82] duration metric: took 2.423333ms for pod "etcd-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.259669    1759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.261748    1759 pod_ready.go:93] pod "kube-apiserver-addons-649000" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.261754    1759 pod_ready.go:82] duration metric: took 2.081084ms for pod "kube-apiserver-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.261758    1759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.263860    1759 pod_ready.go:93] pod "kube-controller-manager-addons-649000" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.263865    1759 pod_ready.go:82] duration metric: took 2.104375ms for pod "kube-controller-manager-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.263868    1759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kfjsf" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.455151    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:54.455487    1759 pod_ready.go:93] pod "kube-proxy-kfjsf" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.455494    1759 pod_ready.go:82] duration metric: took 191.628875ms for pod "kube-proxy-kfjsf" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.455499    1759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.535733    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:54.857546    1759 pod_ready.go:93] pod "kube-scheduler-addons-649000" in "kube-system" namespace has status "Ready":"True"
	I0920 09:44:54.857556    1759 pod_ready.go:82] duration metric: took 402.06775ms for pod "kube-scheduler-addons-649000" in "kube-system" namespace to be "Ready" ...
	I0920 09:44:54.857560    1759 pod_ready.go:39] duration metric: took 33.61640175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 09:44:54.857571    1759 api_server.go:52] waiting for apiserver process to appear ...
	I0920 09:44:54.857634    1759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 09:44:54.866884    1759 api_server.go:72] duration metric: took 34.017697208s to wait for apiserver process to appear ...
	I0920 09:44:54.866896    1759 api_server.go:88] waiting for apiserver healthz status ...
	I0920 09:44:54.866906    1759 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0920 09:44:54.883492    1759 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0920 09:44:54.884545    1759 api_server.go:141] control plane version: v1.31.1
	I0920 09:44:54.884553    1759 api_server.go:131] duration metric: took 17.654625ms to wait for apiserver health ...
	I0920 09:44:54.884557    1759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 09:44:54.954826    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:55.037105    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:55.060899    1759 system_pods.go:59] 17 kube-system pods found
	I0920 09:44:55.060912    1759 system_pods.go:61] "coredns-7c65d6cfc9-skkvr" [d43dbe9a-d346-4413-8a64-39ab0e94770e] Running
	I0920 09:44:55.060917    1759 system_pods.go:61] "csi-hostpath-attacher-0" [dcc117da-763d-47ec-a797-ded655ae7495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 09:44:55.060920    1759 system_pods.go:61] "csi-hostpath-resizer-0" [0fb0cb04-cb8c-40d8-a7d2-ad3fb8854366] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 09:44:55.060923    1759 system_pods.go:61] "csi-hostpathplugin-vrjk9" [d0e908a7-fb95-40b6-ba43-be89eaff0a7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 09:44:55.060927    1759 system_pods.go:61] "etcd-addons-649000" [341d47a6-7233-4417-b77c-e43bda75855c] Running
	I0920 09:44:55.060929    1759 system_pods.go:61] "kube-apiserver-addons-649000" [a06960d7-3144-4609-977f-eb23c574ab1c] Running
	I0920 09:44:55.060931    1759 system_pods.go:61] "kube-controller-manager-addons-649000" [3408a826-00d1-482f-a0ab-109de0f8ab84] Running
	I0920 09:44:55.060933    1759 system_pods.go:61] "kube-ingress-dns-minikube" [cb372290-d533-413c-bc8a-db18543c0ad2] Running
	I0920 09:44:55.060934    1759 system_pods.go:61] "kube-proxy-kfjsf" [f3678c02-8843-4d1a-afe1-c16812e82cb9] Running
	I0920 09:44:55.060937    1759 system_pods.go:61] "kube-scheduler-addons-649000" [2a4cd4b3-f8a6-4eff-8d00-d28c776ef162] Running
	I0920 09:44:55.060938    1759 system_pods.go:61] "metrics-server-84c5f94fbc-fkn5t" [d9d67f5f-acd2-4714-afb4-90a09a30730a] Running
	I0920 09:44:55.060940    1759 system_pods.go:61] "nvidia-device-plugin-daemonset-kjgbc" [3b407700-9de5-47c9-a77d-fded909d90cf] Running
	I0920 09:44:55.060942    1759 system_pods.go:61] "registry-66c9cd494c-72d7c" [5f7510b7-98b0-47da-bdfc-1e2ed64223f4] Running
	I0920 09:44:55.060944    1759 system_pods.go:61] "registry-proxy-5jb54" [8f659e70-1afc-409c-9508-da9cd6399f18] Running
	I0920 09:44:55.060947    1759 system_pods.go:61] "snapshot-controller-56fcc65765-594p6" [4a43d705-6d2d-4647-a813-b0a7fd6e44d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 09:44:55.060949    1759 system_pods.go:61] "snapshot-controller-56fcc65765-lbxb7" [5ad78a9f-5469-4b8d-9716-d7ddd8ef3e4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 09:44:55.060951    1759 system_pods.go:61] "storage-provisioner" [1402a2b7-4a1a-4b32-b76c-2000b20f6b78] Running
	I0920 09:44:55.060954    1759 system_pods.go:74] duration metric: took 176.400458ms to wait for pod list to return data ...
	I0920 09:44:55.060959    1759 default_sa.go:34] waiting for default service account to be created ...
	I0920 09:44:55.257481    1759 default_sa.go:45] found service account: "default"
	I0920 09:44:55.257492    1759 default_sa.go:55] duration metric: took 196.535709ms for default service account to be created ...
	I0920 09:44:55.257496    1759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 09:44:55.455232    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:55.458848    1759 system_pods.go:86] 17 kube-system pods found
	I0920 09:44:55.458858    1759 system_pods.go:89] "coredns-7c65d6cfc9-skkvr" [d43dbe9a-d346-4413-8a64-39ab0e94770e] Running
	I0920 09:44:55.458863    1759 system_pods.go:89] "csi-hostpath-attacher-0" [dcc117da-763d-47ec-a797-ded655ae7495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 09:44:55.458866    1759 system_pods.go:89] "csi-hostpath-resizer-0" [0fb0cb04-cb8c-40d8-a7d2-ad3fb8854366] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 09:44:55.458869    1759 system_pods.go:89] "csi-hostpathplugin-vrjk9" [d0e908a7-fb95-40b6-ba43-be89eaff0a7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 09:44:55.458871    1759 system_pods.go:89] "etcd-addons-649000" [341d47a6-7233-4417-b77c-e43bda75855c] Running
	I0920 09:44:55.458873    1759 system_pods.go:89] "kube-apiserver-addons-649000" [a06960d7-3144-4609-977f-eb23c574ab1c] Running
	I0920 09:44:55.458876    1759 system_pods.go:89] "kube-controller-manager-addons-649000" [3408a826-00d1-482f-a0ab-109de0f8ab84] Running
	I0920 09:44:55.458878    1759 system_pods.go:89] "kube-ingress-dns-minikube" [cb372290-d533-413c-bc8a-db18543c0ad2] Running
	I0920 09:44:55.458881    1759 system_pods.go:89] "kube-proxy-kfjsf" [f3678c02-8843-4d1a-afe1-c16812e82cb9] Running
	I0920 09:44:55.458883    1759 system_pods.go:89] "kube-scheduler-addons-649000" [2a4cd4b3-f8a6-4eff-8d00-d28c776ef162] Running
	I0920 09:44:55.458885    1759 system_pods.go:89] "metrics-server-84c5f94fbc-fkn5t" [d9d67f5f-acd2-4714-afb4-90a09a30730a] Running
	I0920 09:44:55.458887    1759 system_pods.go:89] "nvidia-device-plugin-daemonset-kjgbc" [3b407700-9de5-47c9-a77d-fded909d90cf] Running
	I0920 09:44:55.458889    1759 system_pods.go:89] "registry-66c9cd494c-72d7c" [5f7510b7-98b0-47da-bdfc-1e2ed64223f4] Running
	I0920 09:44:55.458891    1759 system_pods.go:89] "registry-proxy-5jb54" [8f659e70-1afc-409c-9508-da9cd6399f18] Running
	I0920 09:44:55.458894    1759 system_pods.go:89] "snapshot-controller-56fcc65765-594p6" [4a43d705-6d2d-4647-a813-b0a7fd6e44d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 09:44:55.458897    1759 system_pods.go:89] "snapshot-controller-56fcc65765-lbxb7" [5ad78a9f-5469-4b8d-9716-d7ddd8ef3e4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 09:44:55.458900    1759 system_pods.go:89] "storage-provisioner" [1402a2b7-4a1a-4b32-b76c-2000b20f6b78] Running
	I0920 09:44:55.458903    1759 system_pods.go:126] duration metric: took 201.411541ms to wait for k8s-apps to be running ...
	I0920 09:44:55.458908    1759 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 09:44:55.458991    1759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 09:44:55.465737    1759 system_svc.go:56] duration metric: took 6.825167ms WaitForService to wait for kubelet
	I0920 09:44:55.465749    1759 kubeadm.go:582] duration metric: took 34.616586792s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 09:44:55.465760    1759 node_conditions.go:102] verifying NodePressure condition ...
	I0920 09:44:55.555108    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:55.657938    1759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 09:44:55.657948    1759 node_conditions.go:123] node cpu capacity is 2
	I0920 09:44:55.657954    1759 node_conditions.go:105] duration metric: took 192.19775ms to run NodePressure ...
	I0920 09:44:55.657960    1759 start.go:241] waiting for startup goroutines ...
	I0920 09:44:55.953658    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:56.036193    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:56.454803    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:56.536588    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:56.954904    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:57.036339    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:57.454989    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:57.536433    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:57.954754    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:58.036753    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:58.455406    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:58.536506    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:58.957390    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:59.038634    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:59.455253    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:44:59.535489    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:44:59.955137    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:00.036981    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:00.454810    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:00.536125    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:00.955073    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:01.036431    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:01.454800    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:01.535761    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:01.955316    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:02.035080    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:02.457360    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:02.539244    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:02.954478    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:03.035129    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:03.454666    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:03.536291    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:03.954467    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:04.054483    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:04.454751    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:04.536075    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:04.956709    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:05.036651    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:05.457800    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:05.538390    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:05.953666    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:06.036499    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:06.454656    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:06.556457    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:06.954746    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:07.036496    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:07.457651    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:07.535035    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:07.955193    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:08.035864    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:08.454816    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:08.536014    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:08.954658    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:09.035706    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:09.454923    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:09.535372    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:09.955439    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:10.036681    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:10.454491    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:10.536040    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:10.954648    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:11.034612    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:11.454273    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:11.535960    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:11.954608    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:12.054940    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:12.456758    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:12.537667    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:12.954277    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:13.034441    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:13.454611    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:13.535587    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:13.954284    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:14.035696    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:14.455515    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:14.537801    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:14.954985    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:15.036126    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:15.454155    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:15.536092    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:15.954184    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:16.035988    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:16.454464    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:16.554348    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:16.955171    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:17.035506    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:17.456153    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:17.534548    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:17.954073    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:18.035665    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:18.455075    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:18.536758    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:18.954539    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:19.035925    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:19.454366    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:19.534586    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:19.954485    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:20.036336    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:20.455031    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:20.535595    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:20.954275    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:21.054902    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:21.454191    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:21.535387    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:21.954075    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:22.034073    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:22.454200    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:22.535482    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:22.954301    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:23.036145    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:23.454094    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:23.535294    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:23.953349    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:24.036128    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:24.453891    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:24.553815    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:24.952572    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 09:45:25.035649    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:25.453925    1759 kapi.go:107] duration metric: took 59.504358042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 09:45:25.535660    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:26.042419    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:26.538323    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:27.037259    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:27.542586    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:28.042209    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:28.535590    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:29.036531    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:29.535816    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:30.035533    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:30.535821    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:31.034534    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:31.535010    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:32.035497    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:32.535548    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:33.034703    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:33.535448    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:34.034092    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:34.535434    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:35.035549    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:35.533821    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:36.035483    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:36.536511    1759 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 09:45:37.035369    1759 kapi.go:107] duration metric: took 1m11.504324042s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 09:45:51.156098    1759 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 09:45:51.156107    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:51.659921    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:52.155736    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:52.657733    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:53.155894    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:53.659098    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:54.156964    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:54.661261    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:55.155244    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:55.656315    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:56.160506    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:56.656715    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:57.159533    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:57.658572    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:58.158140    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:58.661173    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:59.163490    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:45:59.662603    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:00.161118    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:00.663420    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:01.161151    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:01.664891    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:02.166965    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:02.665604    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:03.163944    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:03.665469    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:04.166217    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:04.664983    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:05.162609    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:05.662842    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:06.163748    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:06.663549    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:07.165189    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:07.666414    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:08.165855    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:08.666388    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:09.167864    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:09.666382    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:10.170733    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:10.667533    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:11.168564    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:11.668958    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:12.171342    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:12.669018    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:13.168738    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:13.671541    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:14.170330    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:14.669264    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:15.167896    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:15.668452    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:16.169040    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:16.670651    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:17.169802    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:17.670694    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:18.170132    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:18.670364    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:19.176159    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:19.668287    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:20.171396    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:20.671510    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:21.171245    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:21.671457    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:22.175013    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:22.671103    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:23.172369    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:23.672686    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:24.182005    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:24.671383    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:25.170269    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:25.670657    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:26.172804    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:26.672103    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:27.172972    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:27.671529    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:28.175218    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:28.672227    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:29.176368    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:29.671859    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:30.173505    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:30.672500    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:31.182841    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:31.671342    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:32.174032    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:32.671441    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:33.173455    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:33.673591    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:34.172298    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:34.677597    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:35.172189    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:35.672557    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:36.173420    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:36.670617    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:37.176694    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:37.719737    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:38.177661    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:38.673756    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:39.176549    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:39.672984    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:40.177736    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:40.671471    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:41.172533    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:41.672056    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:42.174024    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:42.673477    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:43.172208    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:43.674815    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:44.173685    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:44.673302    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:45.172399    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:45.672983    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:46.174380    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:46.672878    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:47.176823    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:47.675065    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:48.176010    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:48.673534    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:49.177106    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:49.674206    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:50.173230    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:50.671861    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:51.172063    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:51.671751    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:52.173828    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:52.673150    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:53.177510    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:53.675917    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:54.173102    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:54.674766    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:55.172422    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:55.672148    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:56.170737    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:56.671888    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:57.172241    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:57.673447    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:58.171779    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:58.671677    1759 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 09:46:59.171755    1759 kapi.go:107] duration metric: took 2m31.003612708s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 09:46:59.175567    1759 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649000 cluster.
	I0920 09:46:59.178537    1759 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 09:46:59.183481    1759 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 09:46:59.187529    1759 out.go:177] * Enabled addons: yakd, nvidia-device-plugin, metrics-server, ingress-dns, volcano, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 09:46:59.191561    1759 addons.go:510] duration metric: took 2m38.328605375s for enable addons: enabled=[yakd nvidia-device-plugin metrics-server ingress-dns volcano storage-provisioner cloud-spanner inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 09:46:59.191578    1759 start.go:246] waiting for cluster config update ...
	I0920 09:46:59.191594    1759 start.go:255] writing updated cluster config ...
	I0920 09:46:59.192027    1759 ssh_runner.go:195] Run: rm -f paused
	I0920 09:46:59.341528    1759 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0920 09:46:59.344556    1759 out.go:201] 
	W0920 09:46:59.347614    1759 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0920 09:46:59.350505    1759 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0920 09:46:59.358519    1759 out.go:177] * Done! kubectl is now configured to use "addons-649000" cluster and "default" namespace by default
	
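For readers of this report: the long runs of "kapi.go:96] waiting for pod ..., current state: Pending" lines above are a label-selector readiness poll that repeats until every matching pod reports Ready (or a timeout is hit). Below is a minimal client-go sketch of that kind of wait. It is an illustration only, not minikube's actual kapi.go/pod_ready.go code; the function names (waitForLabel, podReady), the 500 ms poll interval, and the use of the default kubeconfig are assumptions made for the example.

// Minimal sketch of a label-selector readiness wait (illustrative; not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForLabel polls until every pod matching selector in namespace ns is Ready,
// or the timeout elapses, logging the pending state on each pass.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			p := &pods.Items[i]
			if !podReady(p) {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	// Assumes the default kubeconfig (~/.kube/config) points at the test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
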
	
	==> Docker <==
	Sep 20 16:56:44 addons-649000 cri-dockerd[1173]: time="2024-09-20T16:56:44Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 20 16:56:44 addons-649000 dockerd[1277]: time="2024-09-20T16:56:44.882655237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 16:56:44 addons-649000 dockerd[1277]: time="2024-09-20T16:56:44.882986064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 16:56:44 addons-649000 dockerd[1277]: time="2024-09-20T16:56:44.882998230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 16:56:44 addons-649000 dockerd[1277]: time="2024-09-20T16:56:44.883060229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 16:56:49 addons-649000 dockerd[1271]: time="2024-09-20T16:56:49.305077343Z" level=info msg="ignoring event" container=73d530324680e57b0c9044177aacb1bb0df8c490716a3d1c7b1d4605acf3efc9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.305592165Z" level=info msg="shim disconnected" id=73d530324680e57b0c9044177aacb1bb0df8c490716a3d1c7b1d4605acf3efc9 namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.305697288Z" level=warning msg="cleaning up after shim disconnected" id=73d530324680e57b0c9044177aacb1bb0df8c490716a3d1c7b1d4605acf3efc9 namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.305709287Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.459611498Z" level=info msg="shim disconnected" id=b8732ca30053b060b4f2fcb2e4f112af5a1bf67776215f7c9ad978f19dfa014f namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.459681955Z" level=warning msg="cleaning up after shim disconnected" id=b8732ca30053b060b4f2fcb2e4f112af5a1bf67776215f7c9ad978f19dfa014f namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.459699788Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1271]: time="2024-09-20T16:56:49.459826327Z" level=info msg="ignoring event" container=b8732ca30053b060b4f2fcb2e4f112af5a1bf67776215f7c9ad978f19dfa014f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.475054251Z" level=info msg="shim disconnected" id=c885c97bba770695c4a534f9e818e6a0c4303d9bc132a8d9924237d6b6917a6e namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.475087500Z" level=warning msg="cleaning up after shim disconnected" id=c885c97bba770695c4a534f9e818e6a0c4303d9bc132a8d9924237d6b6917a6e namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.475091917Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1271]: time="2024-09-20T16:56:49.475188248Z" level=info msg="ignoring event" container=c885c97bba770695c4a534f9e818e6a0c4303d9bc132a8d9924237d6b6917a6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:49 addons-649000 dockerd[1271]: time="2024-09-20T16:56:49.569687728Z" level=info msg="ignoring event" container=17954b231a8c86dedc830c489b3e8876b7f4be0c99c794911f46b95d46c7849e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.570543877Z" level=info msg="shim disconnected" id=17954b231a8c86dedc830c489b3e8876b7f4be0c99c794911f46b95d46c7849e namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.570577584Z" level=warning msg="cleaning up after shim disconnected" id=17954b231a8c86dedc830c489b3e8876b7f4be0c99c794911f46b95d46c7849e namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.570582709Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1271]: time="2024-09-20T16:56:49.592808776Z" level=info msg="ignoring event" container=c496f970a935b3ac4d5701919f10d45506a41c6539f7fea242b10f5fd1915890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.592934523Z" level=info msg="shim disconnected" id=c496f970a935b3ac4d5701919f10d45506a41c6539f7fea242b10f5fd1915890 namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.592977231Z" level=warning msg="cleaning up after shim disconnected" id=c496f970a935b3ac4d5701919f10d45506a41c6539f7fea242b10f5fd1915890 namespace=moby
	Sep 20 16:56:49 addons-649000 dockerd[1277]: time="2024-09-20T16:56:49.593004480Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	2017ed7634ab7       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                                                5 seconds ago       Running             task-pv-container                        0                   4e99e2bf825ab       task-pv-pod
	4f087583eea37       fc9db2894f4e4                                                                                                                                26 seconds ago      Exited              helper-pod                               0                   42e88b9b5ca1b       helper-pod-delete-pvc-22cbbd5a-51bb-437b-855a-250da94f44d8
	1696e76fd9c49       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            28 seconds ago      Exited              gadget                                   7                   92306922b186d       gadget-6xg4l
	d21c228d72ece       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              29 seconds ago      Exited              busybox                                  0                   9b71f57dd6688       test-local-path
	f6e697d12dcc8       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              34 seconds ago      Exited              helper-pod                               0                   fe5dade4aedc5       helper-pod-create-pvc-22cbbd5a-51bb-437b-855a-250da94f44d8
	fc7e8e213f51b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   ff2f8a0aa713c       gcp-auth-89d5ffd79-t8v9j
	7ffaf9efbda99       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   1e051a61a8479       ingress-nginx-controller-bc57996ff-rjnvr
	20df7a2fd394b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	2c3b1fe83c976       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	0e5b8fca87860       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	0bf639076ac0c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	fa89b29dfc47e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	31e797485049c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   80ba92865eec3       csi-hostpath-attacher-0
	f858d0d6ade72       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   c3a2e235d7aa2       csi-hostpath-resizer-0
	162cf31c9bc46       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   0f53548e34c4b       csi-hostpathplugin-vrjk9
	d7846f23078c3       420193b27261a                                                                                                                                11 minutes ago      Exited              patch                                    1                   ba8cf080524d2       ingress-nginx-admission-patch-jwlsc
	f7f5cdd8ae94a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   b60e6928c9b67       ingress-nginx-admission-create-whlw7
	860cae23fd744       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   bc18a56195af9       snapshot-controller-56fcc65765-lbxb7
	99e943e415ff5       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   91ef050972818       snapshot-controller-56fcc65765-594p6
	56101c8ec043a       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   1c08e787963e7       local-path-provisioner-86d989889c-7ll9s
	2ed8c21b2df74       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   7eb974a2adab9       kube-ingress-dns-minikube
	c885c97bba770       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago      Exited              registry-proxy                           0                   c496f970a935b       registry-proxy-5jb54
	b8732ca30053b       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             12 minutes ago      Exited              registry                                 0                   17954b231a8c8       registry-66c9cd494c-72d7c
	9f75b851d291d       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   a5c0f750efb9e       storage-provisioner
	f2acc4c22c637       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   a014a2d07b7a3       kube-proxy-kfjsf
	46abc0520e9fd       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   d8d6f52abf7a5       coredns-7c65d6cfc9-skkvr
	e4ee15b2f2e99       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   3c5855db2369b       kube-apiserver-addons-649000
	ab979086fbefb       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   3ac0df1b4129f       kube-scheduler-addons-649000
	af544ec978c14       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   2d3a913879fc3       etcd-addons-649000
	21835ccf63857       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   2e02fac9a2614       kube-controller-manager-addons-649000
	
	
	==> controller_ingress [7ffaf9efbda9] <==
	I0920 16:45:36.395752       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0920 16:45:36.464892       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0920 16:45:36.470893       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0920 16:45:36.474180       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0920 16:45:36.478121       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a80527a0-af3b-4acb-ac0c-3480f8da418c", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0920 16:45:36.479869       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"3a06a8e7-4287-4307-8f8a-4441f02f64c4", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 16:45:36.479890       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"fa105b34-2d78-4a40-a1a3-e2cce9cad292", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 16:45:37.676511       7 nginx.go:317] "Starting NGINX process"
	I0920 16:45:37.676930       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 16:45:37.677264       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 16:45:37.677559       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 16:45:37.689456       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 16:45:37.689611       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-rjnvr"
	I0920 16:45:37.693225       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-rjnvr" node="addons-649000"
	I0920 16:45:37.708823       7 controller.go:213] "Backend successfully reloaded"
	I0920 16:45:37.708867       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 16:45:37.708920       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-rjnvr", UID:"4ef2a103-c100-49bc-bf59-760968c32c9a", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [46abc0520e9f] <==
	Trace[809922685]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:44:51.349)
	Trace[809922685]: [30.00050994s] [30.00050994s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41553 - 47975 "HINFO IN 5464161629907943101.6217522751779396035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010784059s
	[INFO] 10.244.0.6:37958 - 44455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000136162s
	[INFO] 10.244.0.6:37958 - 29856 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035593s
	[INFO] 10.244.0.6:59041 - 39434 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000914s
	[INFO] 10.244.0.6:59041 - 26636 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121283s
	[INFO] 10.244.0.6:37839 - 63560 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033843s
	[INFO] 10.244.0.6:37839 - 65353 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003205s
	[INFO] 10.244.0.6:58703 - 26801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028966s
	[INFO] 10.244.0.6:58703 - 13232 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025799s
	[INFO] 10.244.0.6:39358 - 40311 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000020506s
	[INFO] 10.244.0.6:39358 - 63604 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000021256s
	[INFO] 10.244.0.25:39409 - 26917 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001790402s
	[INFO] 10.244.0.25:48476 - 32870 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001794694s
	[INFO] 10.244.0.25:60487 - 36893 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038758s
	[INFO] 10.244.0.25:58995 - 53365 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000033007s
	[INFO] 10.244.0.25:51427 - 39038 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031464s
	[INFO] 10.244.0.25:51324 - 32495 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000024088s
	[INFO] 10.244.0.25:58999 - 52487 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002146223s
	[INFO] 10.244.0.25:52586 - 48284 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.002007445s
	
	
	==> describe nodes <==
	Name:               addons-649000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-649000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-649000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T09_44_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-649000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:56:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:55:50 +0000   Fri, 20 Sep 2024 16:44:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:55:50 +0000   Fri, 20 Sep 2024 16:44:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:55:50 +0000   Fri, 20 Sep 2024 16:44:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:55:50 +0000   Fri, 20 Sep 2024 16:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-649000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 15329a504fd44ffe84ad21302965db65
	  System UUID:                15329a504fd44ffe84ad21302965db65
	  Boot ID:                    5dace439-66ce-4569-889c-45c196cb70b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-6xg4l                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-t8v9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rjnvr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-skkvr                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-vrjk9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-649000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-649000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-649000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kfjsf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-649000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-594p6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-lbxb7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-7ll9s     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-649000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-649000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-649000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-649000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-649000 event: Registered Node addons-649000 in Controller
	
	
	==> dmesg <==
	[  +8.208795] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.851379] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.058295] kauditd_printk_skb: 9 callbacks suppressed
	[Sep20 16:45] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.329435] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.850045] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.742490] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.948654] kauditd_printk_skb: 22 callbacks suppressed
	[ +11.974950] kauditd_printk_skb: 16 callbacks suppressed
	[Sep20 16:46] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.683484] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.161082] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:47] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.164395] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.263163] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.191701] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:51] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:55] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.191500] kauditd_printk_skb: 15 callbacks suppressed
	[Sep20 16:56] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.310552] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.844608] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.017171] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.130310] kauditd_printk_skb: 14 callbacks suppressed
	[ +20.070064] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [af544ec978c1] <==
	{"level":"info","ts":"2024-09-20T16:44:11.540469Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-20T16:44:11.540744Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-20T16:44:11.928355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:11.928417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:11.928470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:11.928505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:11.928518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:11.928527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:11.928536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:11.936380Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:11.936634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:11.936666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:11.936818Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:11.936705Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-649000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T16:44:11.936792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:11.936797Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:11.936804Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:11.940512Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:11.940920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:11.941423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T16:44:11.964571Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:11.964994Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-20T16:54:12.086896Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1835}
	{"level":"info","ts":"2024-09-20T16:54:12.174255Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1835,"took":"83.761535ms","hash":171433849,"current-db-size-bytes":8720384,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4628480,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-20T16:54:12.174287Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":171433849,"revision":1835,"compact-revision":-1}
	
	
	==> gcp-auth [fc7e8e213f51] <==
	2024/09/20 16:47:14 Ready to write response ...
	2024/09/20 16:47:15 Ready to marshal response ...
	2024/09/20 16:47:15 Ready to write response ...
	2024/09/20 16:47:37 Ready to marshal response ...
	2024/09/20 16:47:37 Ready to write response ...
	2024/09/20 16:47:37 Ready to marshal response ...
	2024/09/20 16:47:37 Ready to write response ...
	2024/09/20 16:47:37 Ready to marshal response ...
	2024/09/20 16:47:37 Ready to write response ...
	2024/09/20 16:55:39 Ready to marshal response ...
	2024/09/20 16:55:39 Ready to write response ...
	2024/09/20 16:55:39 Ready to marshal response ...
	2024/09/20 16:55:39 Ready to write response ...
	2024/09/20 16:55:39 Ready to marshal response ...
	2024/09/20 16:55:39 Ready to write response ...
	2024/09/20 16:55:49 Ready to marshal response ...
	2024/09/20 16:55:49 Ready to write response ...
	2024/09/20 16:56:13 Ready to marshal response ...
	2024/09/20 16:56:13 Ready to write response ...
	2024/09/20 16:56:13 Ready to marshal response ...
	2024/09/20 16:56:13 Ready to write response ...
	2024/09/20 16:56:23 Ready to marshal response ...
	2024/09/20 16:56:23 Ready to write response ...
	2024/09/20 16:56:43 Ready to marshal response ...
	2024/09/20 16:56:43 Ready to write response ...
	
	
	==> kernel <==
	 16:56:49 up 12 min,  0 users,  load average: 1.43, 0.86, 0.49
	Linux addons-649000 5.10.207 #1 SMP PREEMPT Fri Sep 20 00:11:22 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e4ee15b2f2e9] <==
	E0920 16:45:51.164089       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.58.143:443: connect: connection refused" logger="UnhandledError"
	W0920 16:46:31.233464       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.58.143:443: connect: connection refused
	E0920 16:46:31.233509       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.58.143:443: connect: connection refused" logger="UnhandledError"
	W0920 16:46:31.234732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.58.143:443: connect: connection refused
	E0920 16:46:31.234761       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.58.143:443: connect: connection refused" logger="UnhandledError"
	I0920 16:47:14.673835       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 16:47:14.686075       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 16:47:28.050224       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 16:47:28.060780       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 16:47:28.195526       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:47:28.211932       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:47:28.231426       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 16:47:28.430859       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 16:47:28.459667       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:47:28.461812       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 16:47:28.555258       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 16:47:29.240748       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 16:47:29.244135       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 16:47:29.451967       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 16:47:29.461015       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 16:47:29.554835       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 16:47:29.555492       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 16:47:29.585686       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 16:55:39.357779       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.108.229"}
	I0920 16:56:42.082748       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [21835ccf6385] <==
	E0920 16:55:50.617124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:00.668063       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 16:56:01.881955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.167µs"
	W0920 16:56:05.250383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:05.250513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:06.978769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:06.978855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:10.435851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:10.435940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:11.956245       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0920 16:56:18.398149       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:18.398201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:25.495897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:25.496016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:28.608107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:28.608253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:28.904152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="2.834µs"
	I0920 16:56:34.114557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="2.708µs"
	W0920 16:56:41.587905       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:41.588008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:43.574034       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:43.574080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:46.075040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:46.075089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:49.420922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.083µs"
	
	
	==> kube-proxy [f2acc4c22c63] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 16:44:21.440223       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 16:44:21.450510       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0920 16:44:21.450557       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:44:21.465161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 16:44:21.465184       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 16:44:21.465200       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:44:21.466296       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:44:21.466473       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:44:21.466483       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:44:21.467320       1 config.go:199] "Starting service config controller"
	I0920 16:44:21.467472       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:44:21.467570       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:44:21.467581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:44:21.467970       1 config.go:328] "Starting node config controller"
	I0920 16:44:21.467975       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:44:21.567685       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:44:21.567709       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:44:21.568905       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ab979086fbef] <==
	W0920 16:44:12.754139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 16:44:12.754143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:12.754157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 16:44:12.754165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:12.754184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 16:44:12.754192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:12.754207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:12.754213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:12.754228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:12.754232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:12.754313       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:12.754325       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:13.594572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:13.594706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:13.611606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:13.611663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:13.698116       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:13.698448       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:13.710419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 16:44:13.710650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:13.736017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:13.736150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:13.790447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:13.790581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 16:44:15.952189       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 16:56:43 addons-649000 kubelet[2038]: E0920 16:56:43.381873    2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80649275-55c2-479d-b8c1-da30431069df" containerName="helper-pod"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: E0920 16:56:43.381892    2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9d67f5f-acd2-4714-afb4-90a09a30730a" containerName="metrics-server"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.381928    2038 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d67f5f-acd2-4714-afb4-90a09a30730a" containerName="metrics-server"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.381946    2038 memory_manager.go:354] "RemoveStaleState removing state" podUID="80649275-55c2-479d-b8c1-da30431069df" containerName="helper-pod"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.381959    2038 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73d6e89-1df4-4d6a-828a-6776ceb925df" containerName="cloud-spanner-emulator"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.572158    2038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c51263aa-00b6-4fbd-acba-d3b77c57bb7d-gcp-creds\") pod \"task-pv-pod\" (UID: \"c51263aa-00b6-4fbd-acba-d3b77c57bb7d\") " pod="default/task-pv-pod"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.572213    2038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j5d\" (UniqueName: \"kubernetes.io/projected/c51263aa-00b6-4fbd-acba-d3b77c57bb7d-kube-api-access-c9j5d\") pod \"task-pv-pod\" (UID: \"c51263aa-00b6-4fbd-acba-d3b77c57bb7d\") " pod="default/task-pv-pod"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.572242    2038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fcbcb59f-b16a-4e51-b81d-f81ff7fad27d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5014a7f6-7771-11ef-ae68-5274df13b52b\") pod \"task-pv-pod\" (UID: \"c51263aa-00b6-4fbd-acba-d3b77c57bb7d\") " pod="default/task-pv-pod"
	Sep 20 16:56:43 addons-649000 kubelet[2038]: I0920 16:56:43.696594    2038 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fcbcb59f-b16a-4e51-b81d-f81ff7fad27d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5014a7f6-7771-11ef-ae68-5274df13b52b\") pod \"task-pv-pod\" (UID: \"c51263aa-00b6-4fbd-acba-d3b77c57bb7d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/8c0794e1ee3bdcc73a2f448143546a8f55bbb540bce710753a2a5ba80c0ce109/globalmount\"" pod="default/task-pv-pod"
	Sep 20 16:56:44 addons-649000 kubelet[2038]: E0920 16:56:44.007484    2038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="8d90da50-fe4b-4e54-ab29-6132e709bdf6"
	Sep 20 16:56:44 addons-649000 kubelet[2038]: I0920 16:56:44.085128    2038 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e99e2bf825ab89787cc8d87157724bac3dab019d2abd9d7bcaf27c4714d9388"
	Sep 20 16:56:49 addons-649000 kubelet[2038]: E0920 16:56:49.005718    2038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aa2e7834-5b9b-4f36-a86f-8248f5ecf93b"
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.233160    2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=5.496340789 podStartE2EDuration="6.23314704s" podCreationTimestamp="2024-09-20 16:56:43 +0000 UTC" firstStartedPulling="2024-09-20 16:56:44.107013969 +0000 UTC m=+749.161763585" lastFinishedPulling="2024-09-20 16:56:44.843820177 +0000 UTC m=+749.898569836" observedRunningTime="2024-09-20 16:56:45.101058723 +0000 UTC m=+750.155808340" watchObservedRunningTime="2024-09-20 16:56:49.23314704 +0000 UTC m=+754.287896657"
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.427596    2038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gnpm\" (UniqueName: \"kubernetes.io/projected/8d90da50-fe4b-4e54-ab29-6132e709bdf6-kube-api-access-6gnpm\") pod \"8d90da50-fe4b-4e54-ab29-6132e709bdf6\" (UID: \"8d90da50-fe4b-4e54-ab29-6132e709bdf6\") "
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.427622    2038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8d90da50-fe4b-4e54-ab29-6132e709bdf6-gcp-creds\") pod \"8d90da50-fe4b-4e54-ab29-6132e709bdf6\" (UID: \"8d90da50-fe4b-4e54-ab29-6132e709bdf6\") "
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.427676    2038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d90da50-fe4b-4e54-ab29-6132e709bdf6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8d90da50-fe4b-4e54-ab29-6132e709bdf6" (UID: "8d90da50-fe4b-4e54-ab29-6132e709bdf6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.431485    2038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d90da50-fe4b-4e54-ab29-6132e709bdf6-kube-api-access-6gnpm" (OuterVolumeSpecName: "kube-api-access-6gnpm") pod "8d90da50-fe4b-4e54-ab29-6132e709bdf6" (UID: "8d90da50-fe4b-4e54-ab29-6132e709bdf6"). InnerVolumeSpecName "kube-api-access-6gnpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.527936    2038 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6gnpm\" (UniqueName: \"kubernetes.io/projected/8d90da50-fe4b-4e54-ab29-6132e709bdf6-kube-api-access-6gnpm\") on node \"addons-649000\" DevicePath \"\""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.527951    2038 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8d90da50-fe4b-4e54-ab29-6132e709bdf6-gcp-creds\") on node \"addons-649000\" DevicePath \"\""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.729344    2038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj75v\" (UniqueName: \"kubernetes.io/projected/8f659e70-1afc-409c-9508-da9cd6399f18-kube-api-access-nj75v\") pod \"8f659e70-1afc-409c-9508-da9cd6399f18\" (UID: \"8f659e70-1afc-409c-9508-da9cd6399f18\") "
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.729369    2038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7hn8\" (UniqueName: \"kubernetes.io/projected/5f7510b7-98b0-47da-bdfc-1e2ed64223f4-kube-api-access-r7hn8\") pod \"5f7510b7-98b0-47da-bdfc-1e2ed64223f4\" (UID: \"5f7510b7-98b0-47da-bdfc-1e2ed64223f4\") "
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.729996    2038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7510b7-98b0-47da-bdfc-1e2ed64223f4-kube-api-access-r7hn8" (OuterVolumeSpecName: "kube-api-access-r7hn8") pod "5f7510b7-98b0-47da-bdfc-1e2ed64223f4" (UID: "5f7510b7-98b0-47da-bdfc-1e2ed64223f4"). InnerVolumeSpecName "kube-api-access-r7hn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.730174    2038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f659e70-1afc-409c-9508-da9cd6399f18-kube-api-access-nj75v" (OuterVolumeSpecName: "kube-api-access-nj75v") pod "8f659e70-1afc-409c-9508-da9cd6399f18" (UID: "8f659e70-1afc-409c-9508-da9cd6399f18"). InnerVolumeSpecName "kube-api-access-nj75v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.829967    2038 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nj75v\" (UniqueName: \"kubernetes.io/projected/8f659e70-1afc-409c-9508-da9cd6399f18-kube-api-access-nj75v\") on node \"addons-649000\" DevicePath \"\""
	Sep 20 16:56:49 addons-649000 kubelet[2038]: I0920 16:56:49.829984    2038 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r7hn8\" (UniqueName: \"kubernetes.io/projected/5f7510b7-98b0-47da-bdfc-1e2ed64223f4-kube-api-access-r7hn8\") on node \"addons-649000\" DevicePath \"\""
	
	
	==> storage-provisioner [9f75b851d291] <==
	I0920 16:44:24.376796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:44:24.511930       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:44:24.511971       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:44:24.539179       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:44:24.539263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649000_9e9466ce-86e6-4af1-bb23-0ac025062b49!
	I0920 16:44:24.540304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5deab8e2-b739-49cb-9662-393757149f7e", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649000_9e9466ce-86e6-4af1-bb23-0ac025062b49 became leader
	I0920 16:44:24.644002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649000_9e9466ce-86e6-4af1-bb23-0ac025062b49!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-649000 -n addons-649000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-whlw7 ingress-nginx-admission-patch-jwlsc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-649000 describe pod busybox ingress-nginx-admission-create-whlw7 ingress-nginx-admission-patch-jwlsc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-649000 describe pod busybox ingress-nginx-admission-create-whlw7 ingress-nginx-admission-patch-jwlsc: exit status 1 (41.823291ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-649000/192.168.105.2
	Start Time:       Fri, 20 Sep 2024 09:47:37 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrjwh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xrjwh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-649000
	  Normal   Pulling    7m54s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whlw7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jwlsc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-649000 describe pod busybox ingress-nginx-admission-create-whlw7 ingress-nginx-admission-patch-jwlsc: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.26s)

                                                
                                    
TestCertOptions (10.18s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-488000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-488000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.908378375s)

                                                
                                                
-- stdout --
	* [cert-options-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-488000" primary control-plane node in "cert-options-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-488000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-488000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-488000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.137125ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-488000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-488000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-488000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-488000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-488000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.079875ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-488000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-488000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-488000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-20 10:23:43.653439 -0700 PDT m=+2425.951154668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-488000 -n cert-options-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-488000 -n cert-options-488000: exit status 7 (30.471208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-488000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-488000
--- FAIL: TestCertOptions (10.18s)

                                                
                                    
TestCertExpiration (195.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.976767333s)

                                                
                                                
-- stdout --
	* [cert-expiration-355000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-355000" primary control-plane node in "cert-expiration-355000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-355000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-355000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.1928915s)

                                                
                                                
-- stdout --
	* [cert-expiration-355000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-355000" primary control-plane node in "cert-expiration-355000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-355000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-355000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-355000" primary control-plane node in "cert-expiration-355000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-355000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-20 10:26:43.666183 -0700 PDT m=+2605.968897543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-355000 -n cert-expiration-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-355000 -n cert-expiration-355000: exit status 7 (58.914459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-355000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-355000
--- FAIL: TestCertExpiration (195.32s)

                                                
                                    
TestDockerFlags (10.08s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-076000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-076000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.842268958s)

                                                
                                                
-- stdout --
	* [docker-flags-076000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-076000" primary control-plane node in "docker-flags-076000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-076000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:23:23.531865    4158 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:23:23.532004    4158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:23.532007    4158 out.go:358] Setting ErrFile to fd 2...
	I0920 10:23:23.532009    4158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:23.532135    4158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:23:23.533187    4158 out.go:352] Setting JSON to false
	I0920 10:23:23.549414    4158 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3166,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:23:23.549492    4158 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:23:23.556189    4158 out.go:177] * [docker-flags-076000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:23:23.564152    4158 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:23:23.564221    4158 notify.go:220] Checking for updates...
	I0920 10:23:23.572143    4158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:23:23.575136    4158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:23:23.578098    4158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:23:23.581143    4158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:23:23.584109    4158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:23:23.587497    4158 config.go:182] Loaded profile config "force-systemd-flag-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:23.587566    4158 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:23.587619    4158 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:23:23.592116    4158 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:23:23.599113    4158 start.go:297] selected driver: qemu2
	I0920 10:23:23.599122    4158 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:23:23.599128    4158 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:23:23.601529    4158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:23:23.605111    4158 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:23:23.608220    4158 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0920 10:23:23.608242    4158 cni.go:84] Creating CNI manager for ""
	I0920 10:23:23.608268    4158 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:23:23.608273    4158 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:23:23.608301    4158 start.go:340] cluster config:
	{Name:docker-flags-076000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:23:23.611901    4158 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:23:23.619138    4158 out.go:177] * Starting "docker-flags-076000" primary control-plane node in "docker-flags-076000" cluster
	I0920 10:23:23.622970    4158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:23:23.622988    4158 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:23:23.622998    4158 cache.go:56] Caching tarball of preloaded images
	I0920 10:23:23.623060    4158 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:23:23.623066    4158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:23:23.623149    4158 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/docker-flags-076000/config.json ...
	I0920 10:23:23.623161    4158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/docker-flags-076000/config.json: {Name:mk02e857d72974dea2c31277f860ad77aff1ec11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:23:23.623408    4158 start.go:360] acquireMachinesLock for docker-flags-076000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:23.623445    4158 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "docker-flags-076000"
	I0920 10:23:23.623458    4158 start.go:93] Provisioning new machine with config: &{Name:docker-flags-076000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:23.623485    4158 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:23.631976    4158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:23.650846    4158 start.go:159] libmachine.API.Create for "docker-flags-076000" (driver="qemu2")
	I0920 10:23:23.650877    4158 client.go:168] LocalClient.Create starting
	I0920 10:23:23.650950    4158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:23.650984    4158 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:23.650994    4158 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:23.651033    4158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:23.651061    4158 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:23.651069    4158 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:23.651503    4158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:23.814242    4158 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:23.885755    4158 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:23.885761    4158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:23.885948    4158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:23.895050    4158 main.go:141] libmachine: STDOUT: 
	I0920 10:23:23.895072    4158 main.go:141] libmachine: STDERR: 
	I0920 10:23:23.895119    4158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2 +20000M
	I0920 10:23:23.902844    4158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:23.902857    4158 main.go:141] libmachine: STDERR: 
	I0920 10:23:23.902875    4158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:23.902880    4158 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:23.902890    4158 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:23.902915    4158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:75:8e:96:2d:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:23.904510    4158 main.go:141] libmachine: STDOUT: 
	I0920 10:23:23.904523    4158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:23.904539    4158 client.go:171] duration metric: took 253.663959ms to LocalClient.Create
	I0920 10:23:25.906645    4158 start.go:128] duration metric: took 2.283202917s to createHost
	I0920 10:23:25.906697    4158 start.go:83] releasing machines lock for "docker-flags-076000", held for 2.283306125s
	W0920 10:23:25.906767    4158 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:25.925917    4158 out.go:177] * Deleting "docker-flags-076000" in qemu2 ...
	W0920 10:23:25.958451    4158 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:25.958472    4158 start.go:729] Will try again in 5 seconds ...
	I0920 10:23:30.960549    4158 start.go:360] acquireMachinesLock for docker-flags-076000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:30.960903    4158 start.go:364] duration metric: took 256.208µs to acquireMachinesLock for "docker-flags-076000"
	I0920 10:23:30.961000    4158 start.go:93] Provisioning new machine with config: &{Name:docker-flags-076000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:30.961204    4158 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:30.969843    4158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:31.011868    4158 start.go:159] libmachine.API.Create for "docker-flags-076000" (driver="qemu2")
	I0920 10:23:31.011915    4158 client.go:168] LocalClient.Create starting
	I0920 10:23:31.012063    4158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:31.012125    4158 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:31.012142    4158 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:31.012205    4158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:31.012244    4158 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:31.012255    4158 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:31.013080    4158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:31.193128    4158 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:31.275647    4158 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:31.275653    4158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:31.275845    4158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:31.284976    4158 main.go:141] libmachine: STDOUT: 
	I0920 10:23:31.284998    4158 main.go:141] libmachine: STDERR: 
	I0920 10:23:31.285060    4158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2 +20000M
	I0920 10:23:31.293013    4158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:31.293032    4158 main.go:141] libmachine: STDERR: 
	I0920 10:23:31.293045    4158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:31.293050    4158 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:31.293061    4158 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:31.293111    4158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ba:87:2f:82:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/docker-flags-076000/disk.qcow2
	I0920 10:23:31.294725    4158 main.go:141] libmachine: STDOUT: 
	I0920 10:23:31.294740    4158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:31.294751    4158 client.go:171] duration metric: took 282.839667ms to LocalClient.Create
	I0920 10:23:33.296877    4158 start.go:128] duration metric: took 2.33571225s to createHost
	I0920 10:23:33.296933    4158 start.go:83] releasing machines lock for "docker-flags-076000", held for 2.33606825s
	W0920 10:23:33.297378    4158 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:33.312833    4158 out.go:201] 
	W0920 10:23:33.316146    4158 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:23:33.316195    4158 out.go:270] * 
	* 
	W0920 10:23:33.318778    4158 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:23:33.332995    4158 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-076000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.773708ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-076000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-076000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-076000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-076000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-076000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-076000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-076000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.666833ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-076000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-076000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-076000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-076000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-076000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-076000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-20 10:23:33.474417 -0700 PDT m=+2415.771850084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-076000 -n docker-flags-076000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-076000 -n docker-flags-076000: exit status 7 (30.026292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-076000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-076000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-076000
--- FAIL: TestDockerFlags (10.08s)
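Note: the assertions at docker_test.go:63 and docker_test.go:73 never get to inspect Docker's systemd unit because the VM never started. For reference, on a healthy cluster the two ssh probes would be expected to return output roughly like the illustrative sketch below; the exact dockerd argv depends on the unit minikube generates, so only the FOO=BAR / BAZ=BAT values and the --debug / --icc=true flags are implied by the test itself:

$ out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=Environment --no-pager"
Environment=FOO=BAR BAZ=BAT ...
$ out/minikube-darwin-arm64 -p docker-flags-076000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }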

TestForceSystemdFlag (10.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
E0920 10:23:18.723445    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.010425375s)

-- stdout --
	* [force-systemd-flag-173000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-173000" primary control-plane node in "force-systemd-flag-173000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-173000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:23:18.319708    4137 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:23:18.319831    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:18.319835    4137 out.go:358] Setting ErrFile to fd 2...
	I0920 10:23:18.319837    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:18.319966    4137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:23:18.320998    4137 out.go:352] Setting JSON to false
	I0920 10:23:18.337318    4137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3161,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:23:18.337383    4137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:23:18.344034    4137 out.go:177] * [force-systemd-flag-173000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:23:18.364059    4137 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:23:18.364095    4137 notify.go:220] Checking for updates...
	I0920 10:23:18.376928    4137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:23:18.381003    4137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:23:18.384015    4137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:23:18.386988    4137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:23:18.389984    4137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:23:18.393371    4137 config.go:182] Loaded profile config "force-systemd-env-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:18.393449    4137 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:18.393502    4137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:23:18.397957    4137 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:23:18.405035    4137 start.go:297] selected driver: qemu2
	I0920 10:23:18.405040    4137 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:23:18.405047    4137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:23:18.407427    4137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:23:18.410063    4137 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:23:18.413102    4137 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:23:18.413121    4137 cni.go:84] Creating CNI manager for ""
	I0920 10:23:18.413147    4137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:23:18.413159    4137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:23:18.413189    4137 start.go:340] cluster config:
	{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:23:18.417460    4137 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:23:18.424990    4137 out.go:177] * Starting "force-systemd-flag-173000" primary control-plane node in "force-systemd-flag-173000" cluster
	I0920 10:23:18.429083    4137 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:23:18.429097    4137 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:23:18.429107    4137 cache.go:56] Caching tarball of preloaded images
	I0920 10:23:18.429169    4137 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:23:18.429176    4137 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:23:18.429237    4137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/force-systemd-flag-173000/config.json ...
	I0920 10:23:18.429249    4137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/force-systemd-flag-173000/config.json: {Name:mk59d8721dba5adee9953aa47fd645b6baa89ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:23:18.429499    4137 start.go:360] acquireMachinesLock for force-systemd-flag-173000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:18.429543    4137 start.go:364] duration metric: took 32.208µs to acquireMachinesLock for "force-systemd-flag-173000"
	I0920 10:23:18.429559    4137 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:18.429586    4137 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:18.437980    4137 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:18.458490    4137 start.go:159] libmachine.API.Create for "force-systemd-flag-173000" (driver="qemu2")
	I0920 10:23:18.458524    4137 client.go:168] LocalClient.Create starting
	I0920 10:23:18.458593    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:18.458628    4137 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:18.458637    4137 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:18.458678    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:18.458705    4137 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:18.458715    4137 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:18.459169    4137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:18.623761    4137 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:18.720043    4137 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:18.720049    4137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:18.720244    4137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:18.729389    4137 main.go:141] libmachine: STDOUT: 
	I0920 10:23:18.729411    4137 main.go:141] libmachine: STDERR: 
	I0920 10:23:18.729471    4137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2 +20000M
	I0920 10:23:18.737367    4137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:18.737382    4137 main.go:141] libmachine: STDERR: 
	I0920 10:23:18.737396    4137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:18.737404    4137 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:18.737417    4137 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:18.737441    4137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:7b:83:2c:3a:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:18.738981    4137 main.go:141] libmachine: STDOUT: 
	I0920 10:23:18.738995    4137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:18.739014    4137 client.go:171] duration metric: took 280.491375ms to LocalClient.Create
	I0920 10:23:20.741171    4137 start.go:128] duration metric: took 2.311627792s to createHost
	I0920 10:23:20.741229    4137 start.go:83] releasing machines lock for "force-systemd-flag-173000", held for 2.311739416s
	W0920 10:23:20.741300    4137 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:20.759518    4137 out.go:177] * Deleting "force-systemd-flag-173000" in qemu2 ...
	W0920 10:23:20.791698    4137 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:20.791722    4137 start.go:729] Will try again in 5 seconds ...
	I0920 10:23:25.793747    4137 start.go:360] acquireMachinesLock for force-systemd-flag-173000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:25.906844    4137 start.go:364] duration metric: took 112.94175ms to acquireMachinesLock for "force-systemd-flag-173000"
	I0920 10:23:25.907030    4137 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:25.907319    4137 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:25.921024    4137 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:25.968747    4137 start.go:159] libmachine.API.Create for "force-systemd-flag-173000" (driver="qemu2")
	I0920 10:23:25.968804    4137 client.go:168] LocalClient.Create starting
	I0920 10:23:25.968926    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:25.968989    4137 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:25.969007    4137 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:25.969066    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:25.969113    4137 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:25.969126    4137 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:25.969758    4137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:26.166047    4137 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:26.222323    4137 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:26.222329    4137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:26.222489    4137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:26.231624    4137 main.go:141] libmachine: STDOUT: 
	I0920 10:23:26.231645    4137 main.go:141] libmachine: STDERR: 
	I0920 10:23:26.231706    4137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2 +20000M
	I0920 10:23:26.239489    4137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:26.239503    4137 main.go:141] libmachine: STDERR: 
	I0920 10:23:26.239516    4137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:26.239527    4137 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:26.239539    4137 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:26.239566    4137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f2:16:be:2c:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I0920 10:23:26.241136    4137 main.go:141] libmachine: STDOUT: 
	I0920 10:23:26.241148    4137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:26.241161    4137 client.go:171] duration metric: took 272.35925ms to LocalClient.Create
	I0920 10:23:28.243281    4137 start.go:128] duration metric: took 2.335997417s to createHost
	I0920 10:23:28.243338    4137 start.go:83] releasing machines lock for "force-systemd-flag-173000", held for 2.33651425s
	W0920 10:23:28.243713    4137 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:28.257410    4137 out.go:201] 
	W0920 10:23:28.270594    4137 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:23:28.270622    4137 out.go:270] * 
	* 
	W0920 10:23:28.273114    4137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:23:28.287456    4137 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.9575ms)

-- stdout --
	* The control-plane node force-systemd-flag-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-173000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-20 10:23:28.384096 -0700 PDT m=+2410.681387793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-173000 -n force-systemd-flag-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-173000 -n force-systemd-flag-173000: exit status 7 (34.662958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-173000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-173000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-173000
--- FAIL: TestForceSystemdFlag (10.21s)
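Note: TestDockerFlags, TestForceSystemdFlag and TestForceSystemdEnv (below) all fail at the same host-side step: the qemu2 driver cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so no VM is ever created and every follow-up ssh probe exits with status 83 against a Stopped profile. A minimal triage sketch for the macOS agent, assuming socket_vmnet was installed via Homebrew and runs as a root launchd service as described in the minikube qemu2 driver docs (illustrative commands, not taken from this run):

$ ls -l /var/run/socket_vmnet            # does the unix socket from the cluster config exist?
$ pgrep -fl socket_vmnet                 # is the daemon running at all?
$ HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet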

TestForceSystemdEnv (12.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-928000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0920 10:23:13.089965    1679 install.go:79] stdout: 
W0920 10:23:13.090437    1679 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit 

I0920 10:23:13.090458    1679 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit]
I0920 10:23:13.104551    1679 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit]
I0920 10:23:13.114418    1679 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit]
I0920 10:23:13.122940    1679 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit]
I0920 10:23:13.139229    1679 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:23:13.139340    1679 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0920 10:23:14.926303    1679 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0920 10:23:14.926327    1679 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0920 10:23:14.926371    1679 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:23:14.926406    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit
I0920 10:23:15.351806    1679 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40] Decompressors:map[bz2:0x1400071d790 gz:0x1400071d798 tar:0x1400071d740 tar.bz2:0x1400071d750 tar.gz:0x1400071d760 tar.xz:0x1400071d770 tar.zst:0x1400071d780 tbz2:0x1400071d750 tgz:0x1400071d760 txz:0x1400071d770 tzst:0x1400071d780 xz:0x1400071d7a0 zip:0x1400071d7b0 zst:0x1400071d7a8] Getters:map[file:0x14000148b60 http:0x14000a955e0 https:0x14000a95630] Dir:false ProgressListener:<nil> Insecure:false DisableSym
links:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:23:15.351945    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit
I0920 10:23:18.247031    1679 install.go:79] stdout: 
W0920 10:23:18.247199    1679 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit 

I0920 10:23:18.247247    1679 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit]
I0920 10:23:18.260895    1679 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit]
I0920 10:23:18.272356    1679 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit]
I0920 10:23:18.280878    1679 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-928000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.240967375s)

-- stdout --
	* [force-systemd-env-928000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-928000" primary control-plane node in "force-systemd-env-928000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-928000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:23:11.100319    4105 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:23:11.100504    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:11.100507    4105 out.go:358] Setting ErrFile to fd 2...
	I0920 10:23:11.100510    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:23:11.100638    4105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:23:11.101616    4105 out.go:352] Setting JSON to false
	I0920 10:23:11.117865    4105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3154,"bootTime":1726849837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:23:11.117937    4105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:23:11.124164    4105 out.go:177] * [force-systemd-env-928000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:23:11.132079    4105 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:23:11.132157    4105 notify.go:220] Checking for updates...
	I0920 10:23:11.139050    4105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:23:11.142033    4105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:23:11.145056    4105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:23:11.148014    4105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:23:11.151011    4105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0920 10:23:11.154419    4105 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:23:11.154476    4105 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:23:11.157997    4105 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:23:11.164998    4105 start.go:297] selected driver: qemu2
	I0920 10:23:11.165004    4105 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:23:11.165011    4105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:23:11.167415    4105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:23:11.169015    4105 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:23:11.172288    4105 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:23:11.172315    4105 cni.go:84] Creating CNI manager for ""
	I0920 10:23:11.172347    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:23:11.172352    4105 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:23:11.172387    4105 start.go:340] cluster config:
	{Name:force-systemd-env-928000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:23:11.175897    4105 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:23:11.183026    4105 out.go:177] * Starting "force-systemd-env-928000" primary control-plane node in "force-systemd-env-928000" cluster
	I0920 10:23:11.187035    4105 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:23:11.187049    4105 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:23:11.187058    4105 cache.go:56] Caching tarball of preloaded images
	I0920 10:23:11.187115    4105 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:23:11.187121    4105 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:23:11.187177    4105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/force-systemd-env-928000/config.json ...
	I0920 10:23:11.187188    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/force-systemd-env-928000/config.json: {Name:mk96a050b96e9b4ce78fe565f866a4dca9bcdac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:23:11.187415    4105 start.go:360] acquireMachinesLock for force-systemd-env-928000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:11.187453    4105 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "force-systemd-env-928000"
	I0920 10:23:11.187467    4105 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:11.187493    4105 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:11.204007    4105 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:11.222117    4105 start.go:159] libmachine.API.Create for "force-systemd-env-928000" (driver="qemu2")
	I0920 10:23:11.222143    4105 client.go:168] LocalClient.Create starting
	I0920 10:23:11.222209    4105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:11.222237    4105 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:11.222246    4105 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:11.222283    4105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:11.222305    4105 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:11.222313    4105 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:11.222651    4105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:11.437322    4105 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:11.495761    4105 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:11.495766    4105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:11.495962    4105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:11.505016    4105 main.go:141] libmachine: STDOUT: 
	I0920 10:23:11.505039    4105 main.go:141] libmachine: STDERR: 
	I0920 10:23:11.505103    4105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2 +20000M
	I0920 10:23:11.513168    4105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:11.513183    4105 main.go:141] libmachine: STDERR: 
	I0920 10:23:11.513204    4105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:11.513209    4105 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:11.513221    4105 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:11.513246    4105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:be:63:34:b5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:11.514857    4105 main.go:141] libmachine: STDOUT: 
	I0920 10:23:11.514869    4105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:11.514888    4105 client.go:171] duration metric: took 292.748917ms to LocalClient.Create
	I0920 10:23:13.516968    4105 start.go:128] duration metric: took 2.329522875s to createHost
	I0920 10:23:13.517048    4105 start.go:83] releasing machines lock for "force-systemd-env-928000", held for 2.329626792s
	W0920 10:23:13.517131    4105 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:13.530249    4105 out.go:177] * Deleting "force-systemd-env-928000" in qemu2 ...
	W0920 10:23:13.556482    4105 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:13.556506    4105 start.go:729] Will try again in 5 seconds ...
	I0920 10:23:18.558362    4105 start.go:360] acquireMachinesLock for force-systemd-env-928000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:23:20.741401    4105 start.go:364] duration metric: took 2.183054583s to acquireMachinesLock for "force-systemd-env-928000"
	I0920 10:23:20.741549    4105 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:23:20.741838    4105 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:23:20.751883    4105 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:23:20.801547    4105 start.go:159] libmachine.API.Create for "force-systemd-env-928000" (driver="qemu2")
	I0920 10:23:20.801603    4105 client.go:168] LocalClient.Create starting
	I0920 10:23:20.801748    4105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:23:20.801815    4105 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:20.801834    4105 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:20.801900    4105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:23:20.801947    4105 main.go:141] libmachine: Decoding PEM data...
	I0920 10:23:20.801960    4105 main.go:141] libmachine: Parsing certificate...
	I0920 10:23:20.802528    4105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:23:21.081660    4105 main.go:141] libmachine: Creating SSH key...
	I0920 10:23:21.230919    4105 main.go:141] libmachine: Creating Disk image...
	I0920 10:23:21.230928    4105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:23:21.231140    4105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:21.240673    4105 main.go:141] libmachine: STDOUT: 
	I0920 10:23:21.240690    4105 main.go:141] libmachine: STDERR: 
	I0920 10:23:21.240761    4105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2 +20000M
	I0920 10:23:21.248635    4105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:23:21.248658    4105 main.go:141] libmachine: STDERR: 
	I0920 10:23:21.248671    4105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:21.248675    4105 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:23:21.248686    4105 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:23:21.248723    4105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:32:11:cd:e1:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/force-systemd-env-928000/disk.qcow2
	I0920 10:23:21.250372    4105 main.go:141] libmachine: STDOUT: 
	I0920 10:23:21.250390    4105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:23:21.250404    4105 client.go:171] duration metric: took 448.8065ms to LocalClient.Create
	I0920 10:23:23.252522    4105 start.go:128] duration metric: took 2.510716125s to createHost
	I0920 10:23:23.252582    4105 start.go:83] releasing machines lock for "force-systemd-env-928000", held for 2.511212166s
	W0920 10:23:23.253025    4105 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:23:23.275698    4105 out.go:201] 
	W0920 10:23:23.284750    4105 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:23:23.284788    4105 out.go:270] * 
	* 
	W0920 10:23:23.287656    4105 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:23:23.296587    4105 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-928000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-928000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-928000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.408875ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-928000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-928000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-20 10:23:23.390194 -0700 PDT m=+2405.687347626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-928000 -n force-systemd-env-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-928000 -n force-systemd-env-928000: exit status 7 (34.944792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-928000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-928000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-928000
--- FAIL: TestForceSystemdEnv (12.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (31.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-862000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-862000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-q5dg2" [ca1ed39b-6e44-4691-8f4e-89fd2ea0e454] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-q5dg2" [ca1ed39b-6e44-4691-8f4e-89fd2ea0e454] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004777833s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32439
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:18.089259    1679 retry.go:31] will retry after 858.586169ms: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:18.950613    1679 retry.go:31] will retry after 1.153945343s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:20.108318    1679 retry.go:31] will retry after 2.293890427s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:22.404378    1679 retry.go:31] will retry after 2.460745972s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:24.868848    1679 retry.go:31] will retry after 4.941545366s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:29.813213    1679 retry.go:31] will retry after 4.310835775s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
I0920 10:02:34.126981    1679 retry.go:31] will retry after 6.798876111s: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32439: Get "http://192.168.105.4:32439": dial tcp 192.168.105.4:32439: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-862000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-q5dg2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-862000/192.168.105.4
Start Time:       Fri, 20 Sep 2024 10:02:11 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://7f5ca83a5fe0c4a83a942af9ab87ba3bfdbb92208cde2fd92df1cae9ea982214
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Fri, 20 Sep 2024 10:02:27 -0700
Finished:     Fri, 20 Sep 2024 10:02:27 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgsdq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-cgsdq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-q5dg2 to functional-862000
Normal   Pulled     13s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    13s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    13s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    1s (x4 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-q5dg2_default(ca1ed39b-6e44-4691-8f4e-89fd2ea0e454)

                                                
                                                
functional_test.go:1608: (dbg) Run:  kubectl --context functional-862000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-862000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.204.49
IPs:                      10.110.204.49
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32439/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-862000 -n functional-862000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-862000 image save                                                                                   | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | kicbase/echo-server:functional-862000                                                                          |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-862000 image rm                                                                                     | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | kicbase/echo-server:functional-862000                                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-862000 image ls                                                                                     | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	| image   | functional-862000 image load                                                                                   | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-862000 image ls                                                                                     | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	| image   | functional-862000 image save --daemon                                                                          | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | kicbase/echo-server:functional-862000                                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh echo                                                                                     | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | hello                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh cat                                                                                      | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT | 20 Sep 24 10:01 PDT |
	|         | /etc/hostname                                                                                                  |                   |         |         |                     |                     |
	| tunnel  | functional-862000 tunnel                                                                                       | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-862000 tunnel                                                                                       | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:01 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-862000 tunnel                                                                                       | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| service | functional-862000 service list                                                                                 | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	| service | functional-862000 service list                                                                                 | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-862000 service                                                                                      | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | --namespace=default --https                                                                                    |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                               |                   |         |         |                     |                     |
	| service | functional-862000                                                                                              | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | service hello-node --url                                                                                       |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                               |                   |         |         |                     |                     |
	| service | functional-862000 service                                                                                      | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | hello-node --url                                                                                               |                   |         |         |                     |                     |
	| addons  | functional-862000 addons list                                                                                  | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	| addons  | functional-862000 addons list                                                                                  | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-862000 service                                                                                      | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | hello-node-connect --url                                                                                       |                   |         |         |                     |                     |
	| mount   | -p functional-862000                                                                                           | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh findmnt                                                                                  | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh findmnt                                                                                  | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh findmnt                                                                                  | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh -- ls                                                                                    | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | -la /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-862000 ssh cat                                                                                      | functional-862000 | jenkins | v1.34.0 | 20 Sep 24 10:02 PDT | 20 Sep 24 10:02 PDT |
	|         | /mount-9p/test-1726851754483266000                                                                             |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:01:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:01:13.370905    2655 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:01:13.371013    2655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:01:13.371015    2655 out.go:358] Setting ErrFile to fd 2...
	I0920 10:01:13.371017    2655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:01:13.371140    2655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:01:13.372238    2655 out.go:352] Setting JSON to false
	I0920 10:01:13.388791    2655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1836,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:01:13.388883    2655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:01:13.393543    2655 out.go:177] * [functional-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:01:13.402532    2655 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:01:13.402624    2655 notify.go:220] Checking for updates...
	I0920 10:01:13.408488    2655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:01:13.411533    2655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:01:13.414499    2655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:01:13.417486    2655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:01:13.420496    2655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:01:13.423738    2655 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:01:13.423797    2655 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:01:13.428435    2655 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:01:13.435441    2655 start.go:297] selected driver: qemu2
	I0920 10:01:13.435446    2655 start.go:901] validating driver "qemu2" against &{Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:01:13.435500    2655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:01:13.437604    2655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:01:13.437624    2655 cni.go:84] Creating CNI manager for ""
	I0920 10:01:13.437653    2655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:01:13.437703    2655 start.go:340] cluster config:
	{Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:01:13.440940    2655 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:01:13.449426    2655 out.go:177] * Starting "functional-862000" primary control-plane node in "functional-862000" cluster
	I0920 10:01:13.453384    2655 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:01:13.453394    2655 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:01:13.453398    2655 cache.go:56] Caching tarball of preloaded images
	I0920 10:01:13.453459    2655 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:01:13.453463    2655 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:01:13.453516    2655 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/config.json ...
	I0920 10:01:13.453969    2655 start.go:360] acquireMachinesLock for functional-862000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:01:13.454001    2655 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "functional-862000"
	I0920 10:01:13.454008    2655 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:01:13.454011    2655 fix.go:54] fixHost starting: 
	I0920 10:01:13.454555    2655 fix.go:112] recreateIfNeeded on functional-862000: state=Running err=<nil>
	W0920 10:01:13.454562    2655 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:01:13.463352    2655 out.go:177] * Updating the running qemu2 "functional-862000" VM ...
	I0920 10:01:13.467406    2655 machine.go:93] provisionDockerMachine start ...
	I0920 10:01:13.467442    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.467568    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.467571    2655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:01:13.520972    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-862000
	
	I0920 10:01:13.520983    2655 buildroot.go:166] provisioning hostname "functional-862000"
	I0920 10:01:13.521041    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.521153    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.521157    2655 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-862000 && echo "functional-862000" | sudo tee /etc/hostname
	I0920 10:01:13.579029    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-862000
	
	I0920 10:01:13.579083    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.579195    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.579202    2655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-862000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-862000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-862000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:01:13.631700    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:01:13.631707    2655 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-1143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-1143/.minikube}
	I0920 10:01:13.631717    2655 buildroot.go:174] setting up certificates
	I0920 10:01:13.631720    2655 provision.go:84] configureAuth start
	I0920 10:01:13.631723    2655 provision.go:143] copyHostCerts
	I0920 10:01:13.631785    2655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem, removing ...
	I0920 10:01:13.631797    2655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem
	I0920 10:01:13.631926    2655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem (1679 bytes)
	I0920 10:01:13.632108    2655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem, removing ...
	I0920 10:01:13.632110    2655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem
	I0920 10:01:13.632303    2655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem (1078 bytes)
	I0920 10:01:13.632420    2655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem, removing ...
	I0920 10:01:13.632422    2655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem
	I0920 10:01:13.632486    2655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem (1123 bytes)
	I0920 10:01:13.632568    2655 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem org=jenkins.functional-862000 san=[127.0.0.1 192.168.105.4 functional-862000 localhost minikube]
	I0920 10:01:13.776997    2655 provision.go:177] copyRemoteCerts
	I0920 10:01:13.777054    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:01:13.777063    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:13.807289    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:01:13.815472    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 10:01:13.824337    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:01:13.832834    2655 provision.go:87] duration metric: took 201.310583ms to configureAuth
	I0920 10:01:13.832840    2655 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:01:13.832963    2655 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:01:13.833001    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.833094    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.833097    2655 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:01:13.889733    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:01:13.889741    2655 buildroot.go:70] root file system type: tmpfs
	I0920 10:01:13.889788    2655 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:01:13.889859    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.889976    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.890006    2655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:01:13.947030    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:01:13.947095    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:13.947211    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:13.947218    2655 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:01:14.003409    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:01:14.003415    2655 machine.go:96] duration metric: took 536.555875ms to provisionDockerMachine
	I0920 10:01:14.003420    2655 start.go:293] postStartSetup for "functional-862000" (driver="qemu2")
	I0920 10:01:14.003425    2655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:01:14.003471    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:01:14.003478    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:14.032384    2655 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:01:14.033870    2655 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 10:01:14.033874    2655 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/addons for local assets ...
	I0920 10:01:14.033942    2655 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/files for local assets ...
	I0920 10:01:14.034048    2655 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0920 10:01:14.034166    2655 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/test/nested/copy/1679/hosts -> hosts in /etc/test/nested/copy/1679
	I0920 10:01:14.034203    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1679
	I0920 10:01:14.037558    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:01:14.046096    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/test/nested/copy/1679/hosts --> /etc/test/nested/copy/1679/hosts (40 bytes)
	I0920 10:01:14.054939    2655 start.go:296] duration metric: took 51.565708ms for postStartSetup
	I0920 10:01:14.054950    2655 fix.go:56] duration metric: took 601.555542ms for fixHost
	I0920 10:01:14.054990    2655 main.go:141] libmachine: Using SSH client type: native
	I0920 10:01:14.055092    2655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aedc00] 0x104af0440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0920 10:01:14.055095    2655 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:01:14.109524    2655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726851674.197977397
	
	I0920 10:01:14.109528    2655 fix.go:216] guest clock: 1726851674.197977397
	I0920 10:01:14.109531    2655 fix.go:229] Guest: 2024-09-20 10:01:14.197977397 -0700 PDT Remote: 2024-09-20 10:01:14.054951 -0700 PDT m=+0.703604501 (delta=143.026397ms)
	I0920 10:01:14.109545    2655 fix.go:200] guest clock delta is within tolerance: 143.026397ms
	I0920 10:01:14.109547    2655 start.go:83] releasing machines lock for "functional-862000", held for 656.213ms
	I0920 10:01:14.109808    2655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:01:14.109808    2655 ssh_runner.go:195] Run: cat /version.json
	I0920 10:01:14.109816    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:14.109825    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:14.137410    2655 ssh_runner.go:195] Run: systemctl --version
	I0920 10:01:14.179205    2655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:01:14.181068    2655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:01:14.181097    2655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 10:01:14.184490    2655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 10:01:14.184495    2655 start.go:495] detecting cgroup driver to use...
	I0920 10:01:14.184562    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:01:14.191276    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 10:01:14.195661    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:01:14.199772    2655 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:01:14.199801    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:01:14.203803    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:01:14.207914    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:01:14.211860    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:01:14.215943    2655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:01:14.219944    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:01:14.223891    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:01:14.228007    2655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:01:14.232016    2655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:01:14.235938    2655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:01:14.239844    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:14.330502    2655 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:01:14.341677    2655 start.go:495] detecting cgroup driver to use...
	I0920 10:01:14.341735    2655 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:01:14.348753    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:01:14.358562    2655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:01:14.365955    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:01:14.371074    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:01:14.376453    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:01:14.383764    2655 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:01:14.385262    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:01:14.388495    2655 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 10:01:14.394086    2655 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:01:14.494019    2655 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:01:14.602063    2655 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:01:14.602117    2655 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:01:14.608094    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:14.716474    2655 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:01:27.051797    2655 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.343653458s)
	I0920 10:01:27.051873    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:01:27.057786    2655 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:01:27.066007    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:01:27.071884    2655 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:01:27.142792    2655 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:01:27.230113    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:27.327218    2655 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:01:27.334127    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:01:27.339473    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:27.419059    2655 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:01:27.448421    2655 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:01:27.448506    2655 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:01:27.451307    2655 start.go:563] Will wait 60s for crictl version
	I0920 10:01:27.451354    2655 ssh_runner.go:195] Run: which crictl
	I0920 10:01:27.452796    2655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:01:27.465457    2655 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 10:01:27.465539    2655 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:01:27.473020    2655 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:01:27.482753    2655 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 10:01:27.482918    2655 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0920 10:01:27.489654    2655 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0920 10:01:27.494691    2655 kubeadm.go:883] updating cluster {Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 10:01:27.494778    2655 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:01:27.494843    2655 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:01:27.501036    2655 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-862000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0920 10:01:27.501042    2655 docker.go:615] Images already preloaded, skipping extraction
	I0920 10:01:27.501108    2655 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:01:27.506638    2655 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-862000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0920 10:01:27.506644    2655 cache_images.go:84] Images are preloaded, skipping loading
	I0920 10:01:27.506647    2655 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I0920 10:01:27.506712    2655 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-862000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:01:27.506781    2655 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:01:27.522855    2655 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0920 10:01:27.522864    2655 cni.go:84] Creating CNI manager for ""
	I0920 10:01:27.522870    2655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:01:27.522874    2655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:01:27.522883    2655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-862000 NodeName:functional-862000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:01:27.522959    2655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-862000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:01:27.523029    2655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 10:01:27.526942    2655 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:01:27.526974    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:01:27.530429    2655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 10:01:27.536679    2655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:01:27.542664    2655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
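The kubeadm config printed above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new just above. As a minimal, standard-library-only Go sketch (illustrative; a real consumer would use a YAML parser) of splitting such a stream and listing each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document YAML stream on "---" separators and
// extracts the value of each document's top-level "kind:" field.
func listKinds(stream string) []string {
	var kinds []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	// Path taken from the log above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Expected output for the config above:
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(string(data)))
}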
	I0920 10:01:27.548584    2655 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0920 10:01:27.549946    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:27.627378    2655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:01:27.633315    2655 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000 for IP: 192.168.105.4
	I0920 10:01:27.633325    2655 certs.go:194] generating shared ca certs ...
	I0920 10:01:27.633333    2655 certs.go:226] acquiring lock for ca certs: {Name:mk7151e0388cf18b174fabc4929e6178a41b4c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:01:27.633496    2655 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key
	I0920 10:01:27.633559    2655 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key
	I0920 10:01:27.633563    2655 certs.go:256] generating profile certs ...
	I0920 10:01:27.633643    2655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.key
	I0920 10:01:27.633697    2655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/apiserver.key.8e0e742e
	I0920 10:01:27.633750    2655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/proxy-client.key
	I0920 10:01:27.633910    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem (1338 bytes)
	W0920 10:01:27.633940    2655 certs.go:480] ignoring /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0920 10:01:27.633944    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 10:01:27.633970    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:01:27.633993    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:01:27.634016    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem (1679 bytes)
	I0920 10:01:27.634071    2655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:01:27.634420    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:01:27.644685    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:01:27.653096    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:01:27.661487    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:01:27.670273    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 10:01:27.678953    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:01:27.687442    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:01:27.695750    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:01:27.703753    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0920 10:01:27.711992    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:01:27.719968    2655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0920 10:01:27.728359    2655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:01:27.734437    2655 ssh_runner.go:195] Run: openssl version
	I0920 10:01:27.736598    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:01:27.740098    2655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:01:27.741630    2655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:01:27.741653    2655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:01:27.744003    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:01:27.747418    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0920 10:01:27.751213    2655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0920 10:01:27.752680    2655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:59 /usr/share/ca-certificates/1679.pem
	I0920 10:01:27.752703    2655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0920 10:01:27.754642    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0920 10:01:27.758181    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0920 10:01:27.762071    2655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0920 10:01:27.763582    2655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:59 /usr/share/ca-certificates/16792.pem
	I0920 10:01:27.763603    2655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0920 10:01:27.765715    2655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:01:27.769149    2655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:01:27.770650    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:01:27.772685    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:01:27.774554    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:01:27.776557    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:01:27.778412    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:01:27.780424    2655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
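Each `openssl x509 ... -checkend 86400` call above asks whether the named certificate expires within the next 24 hours (86400 seconds). A rough Go equivalent using crypto/x509 (a sketch under that assumption, not minikube's implementation; the paths are copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -noout -in <path> -checkend 86400` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Certificates mirrored from the checks in the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}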
	I0920 10:01:27.782472    2655 kubeadm.go:392] StartCluster: {Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:01:27.782550    2655 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:01:27.788830    2655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:01:27.792206    2655 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:01:27.792209    2655 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:01:27.792240    2655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:01:27.795474    2655 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:01:27.795783    2655 kubeconfig.go:125] found "functional-862000" server: "https://192.168.105.4:8441"
	I0920 10:01:27.796406    2655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:01:27.799614    2655 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0920 10:01:27.799617    2655 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:01:27.799663    2655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:01:27.809442    2655 docker.go:483] Stopping containers: [b173b635884d e25cf87c76e9 6f840adca3b3 fcd7a435b86d b5c8f3b2b164 e49b88adf93d ce8d699b97bf 3d345820b828 520cbf52855b 30b4dfa1d85c f9e2ba7aaa59 3ba386534254 7fbf94f08bfe bb9d3e9cd69a 765308914f1d 321f5f1a27aa f073f6c325b7 7764c1e46682 899119fae2b5 97e154333004 663aa2bd0b7c 4a9712735869 3a8e134f6bd9 63564233a509 da557f2e2199 505301739dd0 e01fd3d80207 2225eeb78cc6 83d2dd5d443e]
	I0920 10:01:27.809505    2655 ssh_runner.go:195] Run: docker stop b173b635884d e25cf87c76e9 6f840adca3b3 fcd7a435b86d b5c8f3b2b164 e49b88adf93d ce8d699b97bf 3d345820b828 520cbf52855b 30b4dfa1d85c f9e2ba7aaa59 3ba386534254 7fbf94f08bfe bb9d3e9cd69a 765308914f1d 321f5f1a27aa f073f6c325b7 7764c1e46682 899119fae2b5 97e154333004 663aa2bd0b7c 4a9712735869 3a8e134f6bd9 63564233a509 da557f2e2199 505301739dd0 e01fd3d80207 2225eeb78cc6 83d2dd5d443e
	I0920 10:01:27.821095    2655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:01:27.925197    2655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:01:27.930984    2655 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 20 16:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 20 17:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 20 16:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Sep 20 17:00 /etc/kubernetes/scheduler.conf
	
	I0920 10:01:27.931019    2655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0920 10:01:27.935599    2655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0920 10:01:27.939755    2655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0920 10:01:27.943777    2655 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:01:27.943805    2655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:01:27.947767    2655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0920 10:01:27.951387    2655 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:01:27.951408    2655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:01:27.955100    2655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:01:27.958633    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:27.975700    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:28.597272    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:28.712090    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:28.735378    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:28.764724    2655 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:01:28.764794    2655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:01:29.266635    2655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:01:29.766447    2655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:01:29.771573    2655 api_server.go:72] duration metric: took 1.007253834s to wait for apiserver process to appear ...
	I0920 10:01:29.771578    2655 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:01:29.771587    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:31.400973    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 10:01:31.400983    2655 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 10:01:31.400989    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:31.440690    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 10:01:31.440700    2655 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 10:01:31.772960    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:31.780539    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 10:01:31.780556    2655 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 10:01:32.272746    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:32.275889    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 10:01:32.275898    2655 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 10:01:32.772686    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:32.787165    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0920 10:01:32.803402    2655 api_server.go:141] control plane version: v1.31.1
	I0920 10:01:32.803430    2655 api_server.go:131] duration metric: took 3.032919208s to wait for apiserver health ...
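The healthz wait above polls https://192.168.105.4:8441/healthz until it returns 200; the intermediate 403 (anonymous user) and 500 (post-start hooks not yet finished) responses are expected while the apiserver is still starting. A minimal sketch of such a probe, assuming an unauthenticated check with TLS verification disabled (minikube's own client authenticates instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
// Sketch only: it skips TLS verification, which is acceptable for a local probe
// against a self-signed apiserver but not for production use.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitHealthz("https://192.168.105.4:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}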
	I0920 10:01:32.803442    2655 cni.go:84] Creating CNI manager for ""
	I0920 10:01:32.803457    2655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:01:32.808870    2655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:01:32.813844    2655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:01:32.823892    2655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 10:01:32.836651    2655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 10:01:32.845552    2655 system_pods.go:59] 7 kube-system pods found
	I0920 10:01:32.845566    2655 system_pods.go:61] "coredns-7c65d6cfc9-dcqt7" [6f81cdc3-a6be-4d33-82ce-b8b76c1b79cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 10:01:32.845571    2655 system_pods.go:61] "etcd-functional-862000" [ea8f1ef2-779f-4f09-822e-e42e2c0b2bb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 10:01:32.845574    2655 system_pods.go:61] "kube-apiserver-functional-862000" [e7bf1e18-c7ed-46d8-a38b-0308e020bd5b] Pending
	I0920 10:01:32.845579    2655 system_pods.go:61] "kube-controller-manager-functional-862000" [c5a28450-509a-4eff-8c98-bde5f3fb652c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 10:01:32.845582    2655 system_pods.go:61] "kube-proxy-twdlr" [0359972a-9968-4fea-a24b-07be8a873fa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 10:01:32.845586    2655 system_pods.go:61] "kube-scheduler-functional-862000" [7a9fd08a-2b06-4fd6-afa8-e6df0c68f35a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 10:01:32.845590    2655 system_pods.go:61] "storage-provisioner" [8744e522-77b1-4718-ba24-5d386614ba97] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 10:01:32.845594    2655 system_pods.go:74] duration metric: took 8.940125ms to wait for pod list to return data ...
	I0920 10:01:32.845599    2655 node_conditions.go:102] verifying NodePressure condition ...
	I0920 10:01:32.848190    2655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 10:01:32.848198    2655 node_conditions.go:123] node cpu capacity is 2
	I0920 10:01:32.848205    2655 node_conditions.go:105] duration metric: took 2.603875ms to run NodePressure ...
	I0920 10:01:32.848213    2655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:01:33.075934    2655 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 10:01:33.080074    2655 kubeadm.go:739] kubelet initialised
	I0920 10:01:33.080082    2655 kubeadm.go:740] duration metric: took 4.137375ms waiting for restarted kubelet to initialise ...
	I0920 10:01:33.080088    2655 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:01:33.084433    2655 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:34.089077    2655 pod_ready.go:93] pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:34.089084    2655 pod_ready.go:82] duration metric: took 1.004957167s for pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:34.089090    2655 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:36.103226    2655 pod_ready.go:103] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"False"
	I0920 10:01:38.103289    2655 pod_ready.go:103] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"False"
	I0920 10:01:40.600225    2655 pod_ready.go:103] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"False"
	I0920 10:01:42.601716    2655 pod_ready.go:103] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"False"
	I0920 10:01:44.101651    2655 pod_ready.go:93] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.101675    2655 pod_ready.go:82] duration metric: took 10.01491s for pod "etcd-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.101691    2655 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.109971    2655 pod_ready.go:93] pod "kube-apiserver-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.109980    2655 pod_ready.go:82] duration metric: took 8.283375ms for pod "kube-apiserver-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.109989    2655 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.116702    2655 pod_ready.go:93] pod "kube-controller-manager-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.116715    2655 pod_ready.go:82] duration metric: took 6.718791ms for pod "kube-controller-manager-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.116732    2655 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-twdlr" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.123082    2655 pod_ready.go:93] pod "kube-proxy-twdlr" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.123092    2655 pod_ready.go:82] duration metric: took 6.350833ms for pod "kube-proxy-twdlr" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.123101    2655 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.128220    2655 pod_ready.go:93] pod "kube-scheduler-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.128228    2655 pod_ready.go:82] duration metric: took 5.12125ms for pod "kube-scheduler-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.128236    2655 pod_ready.go:39] duration metric: took 11.050792208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
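The pod_ready.go waits above repeatedly fetch each system-critical pod and test its Ready condition. A hedged client-go sketch of that pattern (assumes the k8s.io/client-go module is available; the kubeconfig path and pod name are taken from the log, and isPodReady is a hypothetical helper, not minikube's function):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one of the pods named in the log until its Ready condition is True.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-862000", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}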
	I0920 10:01:44.128274    2655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:01:44.137576    2655 ops.go:34] apiserver oom_adj: -16
	I0920 10:01:44.137584    2655 kubeadm.go:597] duration metric: took 16.350004375s to restartPrimaryControlPlane
	I0920 10:01:44.137589    2655 kubeadm.go:394] duration metric: took 16.35976s to StartCluster
	I0920 10:01:44.137604    2655 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:01:44.137795    2655 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:01:44.138448    2655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:01:44.138909    2655 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:01:44.138926    2655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:01:44.138991    2655 addons.go:69] Setting storage-provisioner=true in profile "functional-862000"
	I0920 10:01:44.139004    2655 addons.go:234] Setting addon storage-provisioner=true in "functional-862000"
	W0920 10:01:44.139009    2655 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:01:44.139003    2655 addons.go:69] Setting default-storageclass=true in profile "functional-862000"
	I0920 10:01:44.139021    2655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-862000"
	I0920 10:01:44.139031    2655 host.go:66] Checking if "functional-862000" exists ...
	I0920 10:01:44.139091    2655 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:01:44.140842    2655 addons.go:234] Setting addon default-storageclass=true in "functional-862000"
	W0920 10:01:44.140848    2655 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:01:44.140862    2655 host.go:66] Checking if "functional-862000" exists ...
	I0920 10:01:44.143632    2655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:01:44.144445    2655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:01:44.144459    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:44.147854    2655 out.go:177] * Verifying Kubernetes components...
	I0920 10:01:44.151950    2655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:01:44.154997    2655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:01:44.158962    2655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:01:44.158967    2655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:01:44.158974    2655 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
	I0920 10:01:44.271373    2655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:01:44.278469    2655 node_ready.go:35] waiting up to 6m0s for node "functional-862000" to be "Ready" ...
	I0920 10:01:44.282222    2655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:01:44.292189    2655 node_ready.go:49] node "functional-862000" has status "Ready":"True"
	I0920 10:01:44.292202    2655 node_ready.go:38] duration metric: took 13.720416ms for node "functional-862000" to be "Ready" ...
	I0920 10:01:44.292206    2655 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:01:44.337104    2655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:01:44.493473    2655 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.617495    2655 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0920 10:01:44.621518    2655 addons.go:510] duration metric: took 482.679625ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0920 10:01:44.893056    2655 pod_ready.go:93] pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:44.893068    2655 pod_ready.go:82] duration metric: took 399.656792ms for pod "coredns-7c65d6cfc9-dcqt7" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:44.893073    2655 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:45.297804    2655 pod_ready.go:93] pod "etcd-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:45.297835    2655 pod_ready.go:82] duration metric: took 404.820292ms for pod "etcd-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:45.297856    2655 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:45.695679    2655 pod_ready.go:93] pod "kube-apiserver-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:45.695707    2655 pod_ready.go:82] duration metric: took 397.905167ms for pod "kube-apiserver-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:45.695733    2655 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.095915    2655 pod_ready.go:93] pod "kube-controller-manager-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:46.095944    2655 pod_ready.go:82] duration metric: took 400.263584ms for pod "kube-controller-manager-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.095963    2655 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-twdlr" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.496835    2655 pod_ready.go:93] pod "kube-proxy-twdlr" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:46.496865    2655 pod_ready.go:82] duration metric: took 400.948875ms for pod "kube-proxy-twdlr" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.496884    2655 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.897053    2655 pod_ready.go:93] pod "kube-scheduler-functional-862000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:01:46.897080    2655 pod_ready.go:82] duration metric: took 400.244292ms for pod "kube-scheduler-functional-862000" in "kube-system" namespace to be "Ready" ...
	I0920 10:01:46.897104    2655 pod_ready.go:39] duration metric: took 2.6053135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:01:46.897151    2655 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:01:46.897462    2655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:01:46.915427    2655 api_server.go:72] duration metric: took 2.776949791s to wait for apiserver process to appear ...
	I0920 10:01:46.915443    2655 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:01:46.915465    2655 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0920 10:01:46.922680    2655 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0920 10:01:46.923677    2655 api_server.go:141] control plane version: v1.31.1
	I0920 10:01:46.923686    2655 api_server.go:131] duration metric: took 8.24ms to wait for apiserver health ...
	I0920 10:01:46.923692    2655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 10:01:47.095946    2655 system_pods.go:59] 7 kube-system pods found
	I0920 10:01:47.095959    2655 system_pods.go:61] "coredns-7c65d6cfc9-dcqt7" [6f81cdc3-a6be-4d33-82ce-b8b76c1b79cb] Running
	I0920 10:01:47.095962    2655 system_pods.go:61] "etcd-functional-862000" [ea8f1ef2-779f-4f09-822e-e42e2c0b2bb3] Running
	I0920 10:01:47.095965    2655 system_pods.go:61] "kube-apiserver-functional-862000" [e7bf1e18-c7ed-46d8-a38b-0308e020bd5b] Running
	I0920 10:01:47.095968    2655 system_pods.go:61] "kube-controller-manager-functional-862000" [c5a28450-509a-4eff-8c98-bde5f3fb652c] Running
	I0920 10:01:47.095970    2655 system_pods.go:61] "kube-proxy-twdlr" [0359972a-9968-4fea-a24b-07be8a873fa3] Running
	I0920 10:01:47.095972    2655 system_pods.go:61] "kube-scheduler-functional-862000" [7a9fd08a-2b06-4fd6-afa8-e6df0c68f35a] Running
	I0920 10:01:47.095974    2655 system_pods.go:61] "storage-provisioner" [8744e522-77b1-4718-ba24-5d386614ba97] Running
	I0920 10:01:47.095979    2655 system_pods.go:74] duration metric: took 172.30875ms to wait for pod list to return data ...
	I0920 10:01:47.095984    2655 default_sa.go:34] waiting for default service account to be created ...
	I0920 10:01:47.296984    2655 default_sa.go:45] found service account: "default"
	I0920 10:01:47.297004    2655 default_sa.go:55] duration metric: took 201.043125ms for default service account to be created ...
	I0920 10:01:47.297019    2655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 10:01:47.498669    2655 system_pods.go:86] 7 kube-system pods found
	I0920 10:01:47.498685    2655 system_pods.go:89] "coredns-7c65d6cfc9-dcqt7" [6f81cdc3-a6be-4d33-82ce-b8b76c1b79cb] Running
	I0920 10:01:47.498692    2655 system_pods.go:89] "etcd-functional-862000" [ea8f1ef2-779f-4f09-822e-e42e2c0b2bb3] Running
	I0920 10:01:47.498697    2655 system_pods.go:89] "kube-apiserver-functional-862000" [e7bf1e18-c7ed-46d8-a38b-0308e020bd5b] Running
	I0920 10:01:47.498700    2655 system_pods.go:89] "kube-controller-manager-functional-862000" [c5a28450-509a-4eff-8c98-bde5f3fb652c] Running
	I0920 10:01:47.498703    2655 system_pods.go:89] "kube-proxy-twdlr" [0359972a-9968-4fea-a24b-07be8a873fa3] Running
	I0920 10:01:47.498706    2655 system_pods.go:89] "kube-scheduler-functional-862000" [7a9fd08a-2b06-4fd6-afa8-e6df0c68f35a] Running
	I0920 10:01:47.498711    2655 system_pods.go:89] "storage-provisioner" [8744e522-77b1-4718-ba24-5d386614ba97] Running
	I0920 10:01:47.498718    2655 system_pods.go:126] duration metric: took 201.72325ms to wait for k8s-apps to be running ...
	I0920 10:01:47.498725    2655 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 10:01:47.498871    2655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:01:47.512704    2655 system_svc.go:56] duration metric: took 13.971958ms WaitForService to wait for kubelet
	I0920 10:01:47.512716    2655 kubeadm.go:582] duration metric: took 3.374336666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:01:47.512732    2655 node_conditions.go:102] verifying NodePressure condition ...
	I0920 10:01:47.698099    2655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 10:01:47.698123    2655 node_conditions.go:123] node cpu capacity is 2
	I0920 10:01:47.698149    2655 node_conditions.go:105] duration metric: took 185.43425ms to run NodePressure ...
	I0920 10:01:47.698174    2655 start.go:241] waiting for startup goroutines ...
	I0920 10:01:47.698187    2655 start.go:246] waiting for cluster config update ...
	I0920 10:01:47.698206    2655 start.go:255] writing updated cluster config ...
	I0920 10:01:47.699404    2655 ssh_runner.go:195] Run: rm -f paused
	I0920 10:01:47.764041    2655 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0920 10:01:47.768043    2655 out.go:201] 
	W0920 10:01:47.772256    2655 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0920 10:01:47.775217    2655 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0920 10:01:47.783199    2655 out.go:177] * Done! kubectl is now configured to use "functional-862000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 17:02:27 functional-862000 dockerd[6004]: time="2024-09-20T17:02:27.968828154Z" level=warning msg="cleaning up after shim disconnected" id=7f5ca83a5fe0c4a83a942af9ab87ba3bfdbb92208cde2fd92df1cae9ea982214 namespace=moby
	Sep 20 17:02:27 functional-862000 dockerd[6004]: time="2024-09-20T17:02:27.968846357Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 17:02:27 functional-862000 dockerd[6004]: time="2024-09-20T17:02:27.974026563Z" level=warning msg="cleanup warnings time=\"2024-09-20T17:02:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 20 17:02:36 functional-862000 dockerd[6004]: time="2024-09-20T17:02:36.534530362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:02:36 functional-862000 dockerd[6004]: time="2024-09-20T17:02:36.534580139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:02:36 functional-862000 dockerd[6004]: time="2024-09-20T17:02:36.534588761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:36 functional-862000 dockerd[6004]: time="2024-09-20T17:02:36.534788578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:36 functional-862000 cri-dockerd[6252]: time="2024-09-20T17:02:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c00e46c568c325490375f61199020468c5893396828df3b20f254ac3f161bfe7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.942883826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.942919150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.942929522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.942964637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:38 functional-862000 dockerd[5997]: time="2024-09-20T17:02:38.972568244Z" level=info msg="ignoring event" container=b3ad276c09cfd197287ec0e21cbbdcb6172ddf26b013639110eedd3ba18a54ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.973515975Z" level=info msg="shim disconnected" id=b3ad276c09cfd197287ec0e21cbbdcb6172ddf26b013639110eedd3ba18a54ff namespace=moby
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.973546550Z" level=warning msg="cleaning up after shim disconnected" id=b3ad276c09cfd197287ec0e21cbbdcb6172ddf26b013639110eedd3ba18a54ff namespace=moby
	Sep 20 17:02:38 functional-862000 dockerd[6004]: time="2024-09-20T17:02:38.973566669Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 17:02:40 functional-862000 cri-dockerd[6252]: time="2024-09-20T17:02:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.419934761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.419969959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.420216681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.420248672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:02:40 functional-862000 dockerd[5997]: time="2024-09-20T17:02:40.454877841Z" level=info msg="ignoring event" container=21b91a13a76e289f93874337146c10543085609eac0a90dd5d202e15ad85a14d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.454942823Z" level=info msg="shim disconnected" id=21b91a13a76e289f93874337146c10543085609eac0a90dd5d202e15ad85a14d namespace=moby
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.454972148Z" level=warning msg="cleaning up after shim disconnected" id=21b91a13a76e289f93874337146c10543085609eac0a90dd5d202e15ad85a14d namespace=moby
	Sep 20 17:02:40 functional-862000 dockerd[6004]: time="2024-09-20T17:02:40.454976230Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	21b91a13a76e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   1 second ago         Exited              mount-munger              0                   c00e46c568c32       busybox-mount
	b3ad276c09cfd       72565bf5bbedf                                                                                         3 seconds ago        Exited              echoserver-arm            3                   c9f05e30ebf9d       hello-node-64b4f8f9ff-7t27l
	aa183b2f3ee1c       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         14 seconds ago       Running             myfrontend                0                   5b4ea713a2a90       sp-pod
	7f5ca83a5fe0c       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   2f1dc8c0b277d       hello-node-connect-65d86f57f4-q5dg2
	f11d8d2565654       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         37 seconds ago       Running             nginx                     0                   4dc966fb76562       nginx-svc
	903e1eb6a1288       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   dc7042321c84c       coredns-7c65d6cfc9-dcqt7
	ea5641c526895       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   a3664404bdd12       storage-provisioner
	fd80fd42e9ed5       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   5082e15e79d2f       kube-proxy-twdlr
	ab35cd65dd76b       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   47fe68f050927       etcd-functional-862000
	1883a55232012       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   9199545f135a1       kube-controller-manager-functional-862000
	29adc547fd155       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   25d551c480709       kube-scheduler-functional-862000
	fbf933224beae       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   10fe13e95c9da       kube-apiserver-functional-862000
	b173b635884d9       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   fcd7a435b86dc       coredns-7c65d6cfc9-dcqt7
	e25cf87c76e9c       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   e49b88adf93dc       storage-provisioner
	6f840adca3b34       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   b5c8f3b2b164a       kube-proxy-twdlr
	3d345820b8288       279f381cb3736                                                                                         About a minute ago   Exited              kube-controller-manager   1                   7fbf94f08bfec       kube-controller-manager-functional-862000
	520cbf52855b8       7f8aa378bb47d                                                                                         About a minute ago   Exited              kube-scheduler            1                   3ba3865342540       kube-scheduler-functional-862000
	30b4dfa1d85cd       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   f9e2ba7aaa598       etcd-functional-862000
	
	
	==> coredns [903e1eb6a128] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38426 - 61303 "HINFO IN 4463802882389019001.3663176095805845921. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015290653s
	[INFO] 10.244.0.1:21927 - 35647 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098512s
	[INFO] 10.244.0.1:62606 - 45806 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000108134s
	[INFO] 10.244.0.1:1482 - 1133 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00143727s
	[INFO] 10.244.0.1:31473 - 54612 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000050693s
	[INFO] 10.244.0.1:56538 - 32856 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000115256s
	[INFO] 10.244.0.1:3223 - 5612 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000138416s
	
	
	==> coredns [b173b635884d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45024 - 19432 "HINFO IN 257640525264116068.7344163509830413365. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.165969538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-862000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-862000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=functional-862000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T09_59_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-862000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:02:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:02:32 +0000   Fri, 20 Sep 2024 16:59:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:02:32 +0000   Fri, 20 Sep 2024 16:59:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:02:32 +0000   Fri, 20 Sep 2024 16:59:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:02:32 +0000   Fri, 20 Sep 2024 16:59:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-862000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2e73eb45aaf427fb2affd06bf2ca67d
	  System UUID:                f2e73eb45aaf427fb2affd06bf2ca67d
	  Boot ID:                    a134cf35-7cdd-4648-b5af-bc480de9f786
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     hello-node-64b4f8f9ff-7t27l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     hello-node-connect-65d86f57f4-q5dg2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-7c65d6cfc9-dcqt7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m51s
	  kube-system                 etcd-functional-862000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m57s
	  kube-system                 kube-apiserver-functional-862000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-862000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-twdlr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-functional-862000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m50s                kube-proxy       
	  Normal  Starting                 69s                  kube-proxy       
	  Normal  Starting                 114s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m57s                kubelet          Node functional-862000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m57s                kubelet          Node functional-862000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s                kubelet          Node functional-862000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m57s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m53s                kubelet          Node functional-862000 status is now: NodeReady
	  Normal  RegisteredNode           2m52s                node-controller  Node functional-862000 event: Registered Node functional-862000 in Controller
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node functional-862000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node functional-862000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node functional-862000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                 node-controller  Node functional-862000 event: Registered Node functional-862000 in Controller
	  Normal  Starting                 73s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)    kubelet          Node functional-862000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)    kubelet          Node functional-862000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)    kubelet          Node functional-862000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                  node-controller  Node functional-862000 event: Registered Node functional-862000 in Controller
	
	
	==> dmesg <==
	[  +0.900282] systemd-fstab-generator[4178]: Ignoring "noauto" option for root device
	[  +3.412493] kauditd_printk_skb: 199 callbacks suppressed
	[  +7.790712] kauditd_printk_skb: 33 callbacks suppressed
	[Sep20 17:01] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[ +13.060586] systemd-fstab-generator[5521]: Ignoring "noauto" option for root device
	[  +0.052948] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111090] systemd-fstab-generator[5556]: Ignoring "noauto" option for root device
	[  +0.108551] systemd-fstab-generator[5568]: Ignoring "noauto" option for root device
	[  +0.116072] systemd-fstab-generator[5582]: Ignoring "noauto" option for root device
	[  +5.114563] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.334470] systemd-fstab-generator[6205]: Ignoring "noauto" option for root device
	[  +0.084872] systemd-fstab-generator[6217]: Ignoring "noauto" option for root device
	[  +0.096957] systemd-fstab-generator[6229]: Ignoring "noauto" option for root device
	[  +0.090422] systemd-fstab-generator[6244]: Ignoring "noauto" option for root device
	[  +0.213039] systemd-fstab-generator[6409]: Ignoring "noauto" option for root device
	[  +1.075958] systemd-fstab-generator[6531]: Ignoring "noauto" option for root device
	[  +3.421315] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.131730] systemd-fstab-generator[7549]: Ignoring "noauto" option for root device
	[  +0.055223] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.203955] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.170576] kauditd_printk_skb: 15 callbacks suppressed
	[Sep20 17:02] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.105578] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.637789] kauditd_printk_skb: 38 callbacks suppressed
	[ +18.862741] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [30b4dfa1d85c] <==
	{"level":"info","ts":"2024-09-20T17:00:45.820654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-20T17:00:45.820725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T17:00:45.820747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-20T17:00:45.820820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T17:00:45.820939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-20T17:00:45.826431Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-862000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:00:45.826877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:00:45.826976Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:00:45.826936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:00:45.827408Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:00:45.828909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:00:45.828909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:00:45.830805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-20T17:00:45.831794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:01:14.838026Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T17:01:14.838060Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-862000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-20T17:01:14.838093Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:01:14.838135Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/20 17:01:14 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T17:01:14.850557Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:01:14.850580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T17:01:14.851788Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-20T17:01:14.854837Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-20T17:01:14.854882Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-20T17:01:14.854886Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-862000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ab35cd65dd76] <==
	{"level":"info","ts":"2024-09-20T17:01:29.665378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-20T17:01:29.665434Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:01:29.665699Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:01:29.665726Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:01:29.667449Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T17:01:29.667550Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-20T17:01:29.667572Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-20T17:01:29.667658Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:01:29.667755Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:01:30.955815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T17:01:30.956020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T17:01:30.956098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-20T17:01:30.956135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-20T17:01:30.956193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-20T17:01:30.956247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-20T17:01:30.956303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-20T17:01:30.966646Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-862000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:01:30.967213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:01:30.968700Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:01:30.971353Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:01:30.973459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-20T17:01:30.973691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:01:30.973891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:01:30.978264Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:01:30.979498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:02:41 up 3 min,  0 users,  load average: 1.08, 0.49, 0.20
	Linux functional-862000 5.10.207 #1 SMP PREEMPT Fri Sep 20 00:11:22 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fbf933224bea] <==
	I0920 17:01:31.576494       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 17:01:31.576503       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 17:01:31.576746       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 17:01:31.576770       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:01:31.576828       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 17:01:31.577870       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 17:01:31.578140       1 aggregator.go:171] initial CRD sync complete...
	I0920 17:01:31.578174       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 17:01:31.578201       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 17:01:31.578218       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:01:31.579598       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 17:01:32.477689       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 17:01:32.581747       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0920 17:01:32.582436       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:01:32.980007       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:01:32.987908       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:01:33.010757       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:01:33.019847       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:01:33.021812       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 17:01:35.161706       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:01:49.286103       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.247.161"}
	I0920 17:01:55.682973       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 17:01:55.727006       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.13.79"}
	I0920 17:02:00.604904       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.244.100"}
	I0920 17:02:11.052696       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.204.49"}
	
	
	==> kube-controller-manager [1883a5523201] <==
	I0920 17:01:34.931733       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 17:01:35.011356       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 17:01:35.060901       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 17:01:35.068987       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 17:01:35.473625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 17:01:35.510038       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 17:01:35.510106       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 17:01:55.695935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.566211ms"
	I0920 17:01:55.706702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.743198ms"
	I0920 17:01:55.706783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="18.828µs"
	I0920 17:02:00.267471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="19.411µs"
	I0920 17:02:01.276076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.827µs"
	I0920 17:02:02.292151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="35.948µs"
	I0920 17:02:11.019113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.086076ms"
	I0920 17:02:11.025290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="6.101764ms"
	I0920 17:02:11.025409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="21.494µs"
	I0920 17:02:12.454706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="51.776µs"
	I0920 17:02:13.477260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="40.613µs"
	I0920 17:02:14.522330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="43.071µs"
	I0920 17:02:14.537418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="33.49µs"
	I0920 17:02:27.892084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="61.066µs"
	I0920 17:02:28.747039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="38.739µs"
	I0920 17:02:32.759475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-862000"
	I0920 17:02:39.893164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="47.486µs"
	I0920 17:02:39.928798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="35.781µs"
	
	
	==> kube-controller-manager [3d345820b828] <==
	I0920 17:00:49.681809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0920 17:00:49.682352       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0920 17:00:49.683489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0920 17:00:49.683506       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0920 17:00:49.685690       1 shared_informer.go:320] Caches are synced for node
	I0920 17:00:49.685704       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0920 17:00:49.685713       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0920 17:00:49.685728       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0920 17:00:49.685741       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0920 17:00:49.685774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-862000"
	I0920 17:00:49.712725       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 17:00:49.732439       1 shared_informer.go:320] Caches are synced for disruption
	I0920 17:00:49.779407       1 shared_informer.go:320] Caches are synced for deployment
	I0920 17:00:49.786584       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 17:00:49.817097       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 17:00:49.881572       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 17:00:49.902330       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 17:00:49.913574       1 shared_informer.go:320] Caches are synced for cronjob
	I0920 17:00:50.090182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="377.362536ms"
	I0920 17:00:50.090730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.48µs"
	I0920 17:00:50.333789       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 17:00:50.380765       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 17:00:50.381080       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 17:00:54.501388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.657667ms"
	I0920 17:00:54.502652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.618µs"
	
	
	==> kube-proxy [6f840adca3b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:00:46.925048       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:00:46.929128       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0920 17:00:46.929169       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:00:46.951536       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:00:46.951566       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:00:46.951581       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:00:46.952221       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:00:46.952335       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:00:46.952346       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:00:46.952970       1 config.go:199] "Starting service config controller"
	I0920 17:00:46.952979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:00:46.952988       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:00:46.952989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:00:46.953122       1 config.go:328] "Starting node config controller"
	I0920 17:00:46.953129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:00:47.053285       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:00:47.053284       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:00:47.053295       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fd80fd42e9ed] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:01:32.374572       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:01:32.384236       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0920 17:01:32.384270       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:01:32.392113       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:01:32.392129       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:01:32.392140       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:01:32.392786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:01:32.392929       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:01:32.393008       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:01:32.393436       1 config.go:199] "Starting service config controller"
	I0920 17:01:32.393482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:01:32.393496       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:01:32.393509       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:01:32.393706       1 config.go:328] "Starting node config controller"
	I0920 17:01:32.393862       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:01:32.494048       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:01:32.494070       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:01:32.494274       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29adc547fd15] <==
	I0920 17:01:29.984012       1 serving.go:386] Generated self-signed cert in-memory
	W0920 17:01:31.495579       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 17:01:31.495784       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 17:01:31.495813       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 17:01:31.495847       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 17:01:31.533753       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 17:01:31.533874       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:01:31.535084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 17:01:31.535314       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 17:01:31.535326       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 17:01:31.535334       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 17:01:31.636140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [520cbf52855b] <==
	I0920 17:00:44.459963       1 serving.go:386] Generated self-signed cert in-memory
	W0920 17:00:46.370076       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 17:00:46.370117       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 17:00:46.370127       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 17:00:46.370134       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 17:00:46.400177       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 17:00:46.400220       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:00:46.405001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 17:00:46.405094       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 17:00:46.405398       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 17:00:46.406187       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 17:00:46.505689       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:01:14.833321       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 17:02:26 functional-862000 kubelet[6538]: I0920 17:02:26.770852    6538 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bcb406-f34a-4c3d-bda1-3b03ed3790ff" containerName="myfrontend"
	Sep 20 17:02:26 functional-862000 kubelet[6538]: I0920 17:02:26.824046    6538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcfqj\" (UniqueName: \"kubernetes.io/projected/d5b7121f-a5e4-4bf6-9f2b-317d91402c7a-kube-api-access-lcfqj\") pod \"sp-pod\" (UID: \"d5b7121f-a5e4-4bf6-9f2b-317d91402c7a\") " pod="default/sp-pod"
	Sep 20 17:02:26 functional-862000 kubelet[6538]: I0920 17:02:26.824065    6538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f\" (UniqueName: \"kubernetes.io/host-path/d5b7121f-a5e4-4bf6-9f2b-317d91402c7a-pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f\") pod \"sp-pod\" (UID: \"d5b7121f-a5e4-4bf6-9f2b-317d91402c7a\") " pod="default/sp-pod"
	Sep 20 17:02:26 functional-862000 kubelet[6538]: I0920 17:02:26.879621    6538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bcb406-f34a-4c3d-bda1-3b03ed3790ff" path="/var/lib/kubelet/pods/f6bcb406-f34a-4c3d-bda1-3b03ed3790ff/volumes"
	Sep 20 17:02:27 functional-862000 kubelet[6538]: I0920 17:02:27.875406    6538 scope.go:117] "RemoveContainer" containerID="e452aadbf833f0fe67e63a720a4d6946f83d32e89f735b187c54d01c04cf2b45"
	Sep 20 17:02:27 functional-862000 kubelet[6538]: I0920 17:02:27.876121    6538 scope.go:117] "RemoveContainer" containerID="2928e93f2c550849ba5064e2327f357811f7299678e65656c59aaf960109bac9"
	Sep 20 17:02:27 functional-862000 kubelet[6538]: E0920 17:02:27.876339    6538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-7t27l_default(420d5686-207b-47ab-bb61-f3bd1b1c41d5)\"" pod="default/hello-node-64b4f8f9ff-7t27l" podUID="420d5686-207b-47ab-bb61-f3bd1b1c41d5"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: I0920 17:02:28.732276    6538 scope.go:117] "RemoveContainer" containerID="e452aadbf833f0fe67e63a720a4d6946f83d32e89f735b187c54d01c04cf2b45"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: I0920 17:02:28.732669    6538 scope.go:117] "RemoveContainer" containerID="7f5ca83a5fe0c4a83a942af9ab87ba3bfdbb92208cde2fd92df1cae9ea982214"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: E0920 17:02:28.732847    6538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-q5dg2_default(ca1ed39b-6e44-4691-8f4e-89fd2ea0e454)\"" pod="default/hello-node-connect-65d86f57f4-q5dg2" podUID="ca1ed39b-6e44-4691-8f4e-89fd2ea0e454"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: I0920 17:02:28.762991    6538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.034668614 podStartE2EDuration="2.762972854s" podCreationTimestamp="2024-09-20 17:02:26 +0000 UTC" firstStartedPulling="2024-09-20 17:02:27.184196043 +0000 UTC m=+58.382462123" lastFinishedPulling="2024-09-20 17:02:27.912500282 +0000 UTC m=+59.110766363" observedRunningTime="2024-09-20 17:02:28.762893002 +0000 UTC m=+59.961159083" watchObservedRunningTime="2024-09-20 17:02:28.762972854 +0000 UTC m=+59.961238893"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: E0920 17:02:28.877878    6538 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:02:28 functional-862000 kubelet[6538]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:02:28 functional-862000 kubelet[6538]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:02:28 functional-862000 kubelet[6538]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:02:28 functional-862000 kubelet[6538]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:02:28 functional-862000 kubelet[6538]: I0920 17:02:28.950889    6538 scope.go:117] "RemoveContainer" containerID="ce8d699b97bf738cf725273a87aca420d845018e260572168a8616c0e33f92a7"
	Sep 20 17:02:36 functional-862000 kubelet[6538]: I0920 17:02:36.206170    6538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87mh\" (UniqueName: \"kubernetes.io/projected/b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43-kube-api-access-g87mh\") pod \"busybox-mount\" (UID: \"b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43\") " pod="default/busybox-mount"
	Sep 20 17:02:36 functional-862000 kubelet[6538]: I0920 17:02:36.206218    6538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43-test-volume\") pod \"busybox-mount\" (UID: \"b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43\") " pod="default/busybox-mount"
	Sep 20 17:02:38 functional-862000 kubelet[6538]: I0920 17:02:38.875411    6538 scope.go:117] "RemoveContainer" containerID="2928e93f2c550849ba5064e2327f357811f7299678e65656c59aaf960109bac9"
	Sep 20 17:02:39 functional-862000 kubelet[6538]: I0920 17:02:39.875575    6538 scope.go:117] "RemoveContainer" containerID="7f5ca83a5fe0c4a83a942af9ab87ba3bfdbb92208cde2fd92df1cae9ea982214"
	Sep 20 17:02:39 functional-862000 kubelet[6538]: E0920 17:02:39.876611    6538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-q5dg2_default(ca1ed39b-6e44-4691-8f4e-89fd2ea0e454)\"" pod="default/hello-node-connect-65d86f57f4-q5dg2" podUID="ca1ed39b-6e44-4691-8f4e-89fd2ea0e454"
	Sep 20 17:02:39 functional-862000 kubelet[6538]: I0920 17:02:39.920535    6538 scope.go:117] "RemoveContainer" containerID="2928e93f2c550849ba5064e2327f357811f7299678e65656c59aaf960109bac9"
	Sep 20 17:02:39 functional-862000 kubelet[6538]: I0920 17:02:39.920838    6538 scope.go:117] "RemoveContainer" containerID="b3ad276c09cfd197287ec0e21cbbdcb6172ddf26b013639110eedd3ba18a54ff"
	Sep 20 17:02:39 functional-862000 kubelet[6538]: E0920 17:02:39.920965    6538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-7t27l_default(420d5686-207b-47ab-bb61-f3bd1b1c41d5)\"" pod="default/hello-node-64b4f8f9ff-7t27l" podUID="420d5686-207b-47ab-bb61-f3bd1b1c41d5"
	
	
	==> storage-provisioner [e25cf87c76e9] <==
	I0920 17:00:46.910842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:00:46.919040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:00:46.920778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:01:04.343549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:01:04.344246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-862000_d05d5c99-7b9b-44e4-98db-e2a77fbbda46!
	I0920 17:01:04.346064       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2c5ed31-87f4-4cd3-9d36-6be971b69409", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-862000_d05d5c99-7b9b-44e4-98db-e2a77fbbda46 became leader
	I0920 17:01:04.444692       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-862000_d05d5c99-7b9b-44e4-98db-e2a77fbbda46!
	
	
	==> storage-provisioner [ea5641c52689] <==
	I0920 17:01:32.329536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:01:32.335552       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:01:32.335575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:01:49.743023       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:01:49.744495       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-862000_8e460fb4-8a86-4117-b2f6-f39c5c9f7157!
	I0920 17:01:49.743391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2c5ed31-87f4-4cd3-9d36-6be971b69409", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-862000_8e460fb4-8a86-4117-b2f6-f39c5c9f7157 became leader
	I0920 17:01:49.845167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-862000_8e460fb4-8a86-4117-b2f6-f39c5c9f7157!
	I0920 17:02:12.395454       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0920 17:02:12.396032       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5adb98b6-99b1-47ab-9db5-f123f280a94f", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0920 17:02:12.395803       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    026d98bb-f375-4b8e-9d0e-4a203ca42cda 345 0 2024-09-20 16:59:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-20 16:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  5adb98b6-99b1-47ab-9db5-f123f280a94f 748 0 2024-09-20 17:02:12 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-20 17:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-20 17:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0920 17:02:12.396745       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f" provisioned
	I0920 17:02:12.396763       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0920 17:02:12.396767       1 volume_store.go:212] Trying to save persistentvolume "pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f"
	I0920 17:02:12.428805       1 volume_store.go:219] persistentvolume "pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f" saved
	I0920 17:02:12.428908       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5adb98b6-99b1-47ab-9db5-f123f280a94f", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5adb98b6-99b1-47ab-9db5-f123f280a94f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-862000 -n functional-862000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-862000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-862000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-862000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-862000/192.168.105.4
	Start Time:       Fri, 20 Sep 2024 10:02:36 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://21b91a13a76e289f93874337146c10543085609eac0a90dd5d202e15ad85a14d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 20 Sep 2024 10:02:40 -0700
	      Finished:     Fri, 20 Sep 2024 10:02:40 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g87mh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-g87mh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/busybox-mount to functional-862000
	  Normal  Pulling    5s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.798s (3.798s including waiting). Image size: 3547125 bytes.
	  Normal  Created    1s    kubelet            Created container mount-munger
	  Normal  Started    1s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.05s)
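The kubelet log in the post-mortem above shows the echoserver-arm containers cycling through CrashLoopBackOff, which is what leaves the hello-node-connect service with no ready endpoint to answer the connect test. A minimal follow-up sketch against the same context, assuming the pod name printed in the post-mortem (the ReplicaSet hash suffix will differ on another run) is still current:

	kubectl --context functional-862000 get pods -o wide
	kubectl --context functional-862000 describe pod hello-node-connect-65d86f57f4-q5dg2
	kubectl --context functional-862000 logs hello-node-connect-65d86f57f4-q5dg2 --previous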

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (64.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 node stop m02 -v=7 --alsologtostderr
E0920 10:07:27.019535    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-930000 node stop m02 -v=7 --alsologtostderr: (12.190613291s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
E0920 10:07:36.619875    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr: (25.968313916s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
E0920 10:08:17.581831    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 3 (25.974777792s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 10:08:27.640084    3287 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0920 10:08:27.640098    3287 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.977502833s)
ha_test.go:413: expected profile "ha-930000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-930000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-930000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-930000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":
false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\
"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 3 (25.95239925s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 10:09:19.568196    3304 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0920 10:09:19.568206    3304 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)
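The assertion at ha_test.go:413 expects the profile's Status to read "Degraded" once one control-plane node is stopped, but the report shows "Unknown" because the status probes of the remaining nodes timed out. A minimal sketch for checking that field directly on the host, assuming jq is available; the profile name and JSON shape are the ones shown in the 'profile list --output json' dump above:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-930000") | .Status'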

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (82.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-930000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.085345875s)

                                                
                                                
-- stdout --
	* Starting "ha-930000-m02" control-plane node in "ha-930000" cluster
	* Restarting existing qemu2 VM for "ha-930000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-930000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:09:19.603367    3317 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:09:19.603606    3317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:09:19.603614    3317 out.go:358] Setting ErrFile to fd 2...
	I0920 10:09:19.603617    3317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:09:19.603756    3317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:09:19.604002    3317 mustload.go:65] Loading cluster: ha-930000
	I0920 10:09:19.604258    3317 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0920 10:09:19.604485    3317 host.go:58] "ha-930000-m02" host status: Stopped
	I0920 10:09:19.609027    3317 out.go:177] * Starting "ha-930000-m02" control-plane node in "ha-930000" cluster
	I0920 10:09:19.613174    3317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:09:19.613221    3317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:09:19.613239    3317 cache.go:56] Caching tarball of preloaded images
	I0920 10:09:19.613354    3317 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:09:19.613364    3317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:09:19.613420    3317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/ha-930000/config.json ...
	I0920 10:09:19.614068    3317 start.go:360] acquireMachinesLock for ha-930000-m02: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:09:19.614134    3317 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "ha-930000-m02"
	I0920 10:09:19.614145    3317 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:09:19.614149    3317 fix.go:54] fixHost starting: m02
	I0920 10:09:19.614267    3317 fix.go:112] recreateIfNeeded on ha-930000-m02: state=Stopped err=<nil>
	W0920 10:09:19.614274    3317 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:09:19.617998    3317 out.go:177] * Restarting existing qemu2 VM for "ha-930000-m02" ...
	I0920 10:09:19.621072    3317 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:09:19.621156    3317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:33:86:70:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/disk.qcow2
	I0920 10:09:19.623500    3317 main.go:141] libmachine: STDOUT: 
	I0920 10:09:19.623517    3317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:09:19.623541    3317 fix.go:56] duration metric: took 9.391875ms for fixHost
	I0920 10:09:19.623551    3317 start.go:83] releasing machines lock for "ha-930000-m02", held for 9.4055ms
	W0920 10:09:19.623559    3317 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:09:19.623582    3317 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:09:19.623586    3317 start.go:729] Will try again in 5 seconds ...
	I0920 10:09:24.625452    3317 start.go:360] acquireMachinesLock for ha-930000-m02: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:09:24.625562    3317 start.go:364] duration metric: took 87.833µs to acquireMachinesLock for "ha-930000-m02"
	I0920 10:09:24.625591    3317 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:09:24.625595    3317 fix.go:54] fixHost starting: m02
	I0920 10:09:24.625762    3317 fix.go:112] recreateIfNeeded on ha-930000-m02: state=Stopped err=<nil>
	W0920 10:09:24.625768    3317 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:09:24.629466    3317 out.go:177] * Restarting existing qemu2 VM for "ha-930000-m02" ...
	I0920 10:09:24.633395    3317 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:09:24.633428    3317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:33:86:70:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/disk.qcow2
	I0920 10:09:24.635319    3317 main.go:141] libmachine: STDOUT: 
	I0920 10:09:24.635348    3317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:09:24.635373    3317 fix.go:56] duration metric: took 9.7785ms for fixHost
	I0920 10:09:24.635377    3317 start.go:83] releasing machines lock for "ha-930000-m02", held for 9.809125ms
	W0920 10:09:24.635426    3317 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:09:24.639414    3317 out.go:201] 
	W0920 10:09:24.643423    3317 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:09:24.643436    3317 out.go:270] * 
	* 
	W0920 10:09:24.645300    3317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:09:24.649438    3317 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0920 10:09:19.603367    3317 out.go:345] Setting OutFile to fd 1 ...
I0920 10:09:19.603606    3317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:09:19.603614    3317 out.go:358] Setting ErrFile to fd 2...
I0920 10:09:19.603617    3317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:09:19.603756    3317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:09:19.604002    3317 mustload.go:65] Loading cluster: ha-930000
I0920 10:09:19.604258    3317 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0920 10:09:19.604485    3317 host.go:58] "ha-930000-m02" host status: Stopped
I0920 10:09:19.609027    3317 out.go:177] * Starting "ha-930000-m02" control-plane node in "ha-930000" cluster
I0920 10:09:19.613174    3317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:09:19.613221    3317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0920 10:09:19.613239    3317 cache.go:56] Caching tarball of preloaded images
I0920 10:09:19.613354    3317 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0920 10:09:19.613364    3317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 10:09:19.613420    3317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/ha-930000/config.json ...
I0920 10:09:19.614068    3317 start.go:360] acquireMachinesLock for ha-930000-m02: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:09:19.614134    3317 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "ha-930000-m02"
I0920 10:09:19.614145    3317 start.go:96] Skipping create...Using existing machine configuration
I0920 10:09:19.614149    3317 fix.go:54] fixHost starting: m02
I0920 10:09:19.614267    3317 fix.go:112] recreateIfNeeded on ha-930000-m02: state=Stopped err=<nil>
W0920 10:09:19.614274    3317 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:09:19.617998    3317 out.go:177] * Restarting existing qemu2 VM for "ha-930000-m02" ...
I0920 10:09:19.621072    3317 qemu.go:418] Using hvf for hardware acceleration
I0920 10:09:19.621156    3317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:33:86:70:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/disk.qcow2
I0920 10:09:19.623500    3317 main.go:141] libmachine: STDOUT: 
I0920 10:09:19.623517    3317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0920 10:09:19.623541    3317 fix.go:56] duration metric: took 9.391875ms for fixHost
I0920 10:09:19.623551    3317 start.go:83] releasing machines lock for "ha-930000-m02", held for 9.4055ms
W0920 10:09:19.623559    3317 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:09:19.623582    3317 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:09:19.623586    3317 start.go:729] Will try again in 5 seconds ...
I0920 10:09:24.625452    3317 start.go:360] acquireMachinesLock for ha-930000-m02: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:09:24.625562    3317 start.go:364] duration metric: took 87.833µs to acquireMachinesLock for "ha-930000-m02"
I0920 10:09:24.625591    3317 start.go:96] Skipping create...Using existing machine configuration
I0920 10:09:24.625595    3317 fix.go:54] fixHost starting: m02
I0920 10:09:24.625762    3317 fix.go:112] recreateIfNeeded on ha-930000-m02: state=Stopped err=<nil>
W0920 10:09:24.625768    3317 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:09:24.629466    3317 out.go:177] * Restarting existing qemu2 VM for "ha-930000-m02" ...
I0920 10:09:24.633395    3317 qemu.go:418] Using hvf for hardware acceleration
I0920 10:09:24.633428    3317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:33:86:70:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000-m02/disk.qcow2
I0920 10:09:24.635319    3317 main.go:141] libmachine: STDOUT: 
I0920 10:09:24.635348    3317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0920 10:09:24.635373    3317 fix.go:56] duration metric: took 9.7785ms for fixHost
I0920 10:09:24.635377    3317 start.go:83] releasing machines lock for "ha-930000-m02", held for 9.809125ms
W0920 10:09:24.635426    3317 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:09:24.639414    3317 out.go:201] 
W0920 10:09:24.643423    3317 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:09:24.643436    3317 out.go:270] * 
* 
W0920 10:09:24.645300    3317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:09:24.649438    3317 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-930000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
E0920 10:09:39.502305    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr: (25.962614708s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (25.95768625s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 3 (25.958715209s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 10:10:42.530516    3330 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0920 10:10:42.530526    3330 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (82.97s)
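Both restart attempts above fail with the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, so the qemu2 VM never comes back: the socket_vmnet daemon on the Jenkins host is not serving its socket. A minimal host-side sketch, using only the paths printed in the log (how the daemon is restarted depends on how socket_vmnet was installed on this agent):

	# is a socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# the client binary the qemu2 driver invokes, per the libmachine log line above
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client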

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-930000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-930000 -v=7 --alsologtostderr
E0920 10:11:55.617098    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:11:59.289517    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:12:23.338552    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-930000 -v=7 --alsologtostderr: (3m49.016046875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-930000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-930000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.233280833s)

                                                
                                                
-- stdout --
	* [ha-930000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-930000" primary control-plane node in "ha-930000" cluster
	* Restarting existing qemu2 VM for "ha-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:14:35.234070    3375 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:14:35.234287    3375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:35.234296    3375 out.go:358] Setting ErrFile to fd 2...
	I0920 10:14:35.234299    3375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:35.234470    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:14:35.235631    3375 out.go:352] Setting JSON to false
	I0920 10:14:35.255447    3375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2638,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:14:35.255518    3375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:14:35.258190    3375 out.go:177] * [ha-930000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:14:35.266325    3375 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:14:35.266364    3375 notify.go:220] Checking for updates...
	I0920 10:14:35.274253    3375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:14:35.277262    3375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:14:35.281254    3375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:14:35.284290    3375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:14:35.287246    3375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:14:35.290584    3375 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:14:35.290635    3375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:14:35.295245    3375 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:14:35.302283    3375 start.go:297] selected driver: qemu2
	I0920 10:14:35.302290    3375 start.go:901] validating driver "qemu2" against &{Name:ha-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-930000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:14:35.302374    3375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:14:35.305015    3375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:14:35.305043    3375 cni.go:84] Creating CNI manager for ""
	I0920 10:14:35.305075    3375 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 10:14:35.305116    3375 start.go:340] cluster config:
	{Name:ha-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-930000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:14:35.309152    3375 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:14:35.317219    3375 out.go:177] * Starting "ha-930000" primary control-plane node in "ha-930000" cluster
	I0920 10:14:35.321277    3375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:14:35.321299    3375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:14:35.321318    3375 cache.go:56] Caching tarball of preloaded images
	I0920 10:14:35.321391    3375 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:14:35.321398    3375 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:14:35.321478    3375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/ha-930000/config.json ...
	I0920 10:14:35.321921    3375 start.go:360] acquireMachinesLock for ha-930000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:14:35.321957    3375 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "ha-930000"
	I0920 10:14:35.321967    3375 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:14:35.321971    3375 fix.go:54] fixHost starting: 
	I0920 10:14:35.322093    3375 fix.go:112] recreateIfNeeded on ha-930000: state=Stopped err=<nil>
	W0920 10:14:35.322102    3375 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:14:35.325194    3375 out.go:177] * Restarting existing qemu2 VM for "ha-930000" ...
	I0920 10:14:35.337044    3375 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:14:35.337083    3375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:89:18:2e:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/disk.qcow2
	I0920 10:14:35.339172    3375 main.go:141] libmachine: STDOUT: 
	I0920 10:14:35.339205    3375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:14:35.339237    3375 fix.go:56] duration metric: took 17.263625ms for fixHost
	I0920 10:14:35.339241    3375 start.go:83] releasing machines lock for "ha-930000", held for 17.27975ms
	W0920 10:14:35.339248    3375 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:14:35.339292    3375 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:14:35.339297    3375 start.go:729] Will try again in 5 seconds ...
	I0920 10:14:40.341351    3375 start.go:360] acquireMachinesLock for ha-930000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:14:40.341737    3375 start.go:364] duration metric: took 288.208µs to acquireMachinesLock for "ha-930000"
	I0920 10:14:40.341873    3375 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:14:40.341898    3375 fix.go:54] fixHost starting: 
	I0920 10:14:40.342549    3375 fix.go:112] recreateIfNeeded on ha-930000: state=Stopped err=<nil>
	W0920 10:14:40.342574    3375 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:14:40.350038    3375 out.go:177] * Restarting existing qemu2 VM for "ha-930000" ...
	I0920 10:14:40.354131    3375 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:14:40.354468    3375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:89:18:2e:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/disk.qcow2
	I0920 10:14:40.363578    3375 main.go:141] libmachine: STDOUT: 
	I0920 10:14:40.363663    3375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:14:40.363752    3375 fix.go:56] duration metric: took 21.854ms for fixHost
	I0920 10:14:40.363772    3375 start.go:83] releasing machines lock for "ha-930000", held for 22.009083ms
	W0920 10:14:40.364032    3375 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:14:40.372127    3375 out.go:201] 
	W0920 10:14:40.376278    3375 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:14:40.376359    3375 out.go:270] * 
	* 
	W0920 10:14:40.379384    3375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:14:40.391157    3375 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-930000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-930000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (33.784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
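Every restart attempt in this run fails with the same qemu2 driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which points at nothing listening on the socket_vmnet socket on the host rather than at the cluster config. The sketch below is a minimal local-triage probe (an assumption for manual debugging on the affected macOS agent, not part of the test suite) that dials the same unix socket the driver passes to socket_vmnet_client; a refused dial reproduces the failure independent of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the unix socket that the qemu2 driver hands to socket_vmnet_client.
// "connection refused" here means no socket_vmnet daemon is accepting
// connections, which is the failure mode shown in the run above.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
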

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-930000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.530584ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-930000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-930000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:14:40.537689    3388 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:14:40.537926    3388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:40.537934    3388 out.go:358] Setting ErrFile to fd 2...
	I0920 10:14:40.537937    3388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:40.538073    3388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:14:40.538319    3388 mustload.go:65] Loading cluster: ha-930000
	I0920 10:14:40.538565    3388 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0920 10:14:40.538881    3388 out.go:270] ! The control-plane node ha-930000 host is not running (will try others): state=Stopped
	! The control-plane node ha-930000 host is not running (will try others): state=Stopped
	W0920 10:14:40.538991    3388 out.go:270] ! The control-plane node ha-930000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-930000-m02 host is not running (will try others): state=Stopped
	I0920 10:14:40.543656    3388 out.go:177] * The control-plane node ha-930000-m03 host is not running: state=Stopped
	I0920 10:14:40.547515    3388 out.go:177]   To start a cluster, run: "minikube start -p ha-930000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-930000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr: exit status 7 (31.136125ms)

                                                
                                                
-- stdout --
	ha-930000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:14:40.580436    3390 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:14:40.580593    3390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:40.580597    3390 out.go:358] Setting ErrFile to fd 2...
	I0920 10:14:40.580599    3390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:14:40.580738    3390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:14:40.580864    3390 out.go:352] Setting JSON to false
	I0920 10:14:40.580877    3390 mustload.go:65] Loading cluster: ha-930000
	I0920 10:14:40.580938    3390 notify.go:220] Checking for updates...
	I0920 10:14:40.581145    3390 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:14:40.581153    3390 status.go:174] checking status of ha-930000 ...
	I0920 10:14:40.581404    3390 status.go:364] ha-930000 host status = "Stopped" (err=<nil>)
	I0920 10:14:40.581407    3390 status.go:377] host is not running, skipping remaining checks
	I0920 10:14:40.581409    3390 status.go:176] ha-930000 status: &{Name:ha-930000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:14:40.581419    3390 status.go:174] checking status of ha-930000-m02 ...
	I0920 10:14:40.581511    3390 status.go:364] ha-930000-m02 host status = "Stopped" (err=<nil>)
	I0920 10:14:40.581514    3390 status.go:377] host is not running, skipping remaining checks
	I0920 10:14:40.581515    3390 status.go:176] ha-930000-m02 status: &{Name:ha-930000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:14:40.581519    3390 status.go:174] checking status of ha-930000-m03 ...
	I0920 10:14:40.581604    3390 status.go:364] ha-930000-m03 host status = "Stopped" (err=<nil>)
	I0920 10:14:40.581607    3390 status.go:377] host is not running, skipping remaining checks
	I0920 10:14:40.581608    3390 status.go:176] ha-930000-m03 status: &{Name:ha-930000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:14:40.581612    3390 status.go:174] checking status of ha-930000-m04 ...
	I0920 10:14:40.581707    3390 status.go:364] ha-930000-m04 host status = "Stopped" (err=<nil>)
	I0920 10:14:40.581709    3390 status.go:377] host is not running, skipping remaining checks
	I0920 10:14:40.581711    3390 status.go:176] ha-930000-m04 status: &{Name:ha-930000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (30.760584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-930000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-930000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-930000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-930000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (30.438917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
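The assertion at ha_test.go:413 parses the output of "profile list --output json" and expects the profile's Status to be "Degraded", while this run reports "Starting". The following is a short sketch (assumed to be run locally against the same binary; the struct fields mirror the JSON payload captured above, not an exported minikube API) that extracts the same Status field the test checks.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Matches the shape of the `profile list --output json` payload shown above:
// {"invalid":[...],"valid":[{"Name":"ha-930000","Status":"Starting",...}]}
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" here; this run reports "Starting".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}
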

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 stop -v=7 --alsologtostderr
E0920 10:16:55.642554    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:16:59.315990    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-930000 stop -v=7 --alsologtostderr: (3m21.983378584s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr: exit status 7 (65.773083ms)

                                                
                                                
-- stdout --
	ha-930000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-930000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:18:02.775187    3434 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:18:02.775372    3434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:02.775377    3434 out.go:358] Setting ErrFile to fd 2...
	I0920 10:18:02.775380    3434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:02.775552    3434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:18:02.775711    3434 out.go:352] Setting JSON to false
	I0920 10:18:02.775727    3434 mustload.go:65] Loading cluster: ha-930000
	I0920 10:18:02.775781    3434 notify.go:220] Checking for updates...
	I0920 10:18:02.776059    3434 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:18:02.776069    3434 status.go:174] checking status of ha-930000 ...
	I0920 10:18:02.776388    3434 status.go:364] ha-930000 host status = "Stopped" (err=<nil>)
	I0920 10:18:02.776393    3434 status.go:377] host is not running, skipping remaining checks
	I0920 10:18:02.776396    3434 status.go:176] ha-930000 status: &{Name:ha-930000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:18:02.776409    3434 status.go:174] checking status of ha-930000-m02 ...
	I0920 10:18:02.776546    3434 status.go:364] ha-930000-m02 host status = "Stopped" (err=<nil>)
	I0920 10:18:02.776550    3434 status.go:377] host is not running, skipping remaining checks
	I0920 10:18:02.776552    3434 status.go:176] ha-930000-m02 status: &{Name:ha-930000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:18:02.776557    3434 status.go:174] checking status of ha-930000-m03 ...
	I0920 10:18:02.776694    3434 status.go:364] ha-930000-m03 host status = "Stopped" (err=<nil>)
	I0920 10:18:02.776698    3434 status.go:377] host is not running, skipping remaining checks
	I0920 10:18:02.776700    3434 status.go:176] ha-930000-m03 status: &{Name:ha-930000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:18:02.776705    3434 status.go:174] checking status of ha-930000-m04 ...
	I0920 10:18:02.776831    3434 status.go:364] ha-930000-m04 host status = "Stopped" (err=<nil>)
	I0920 10:18:02.776835    3434 status.go:377] host is not running, skipping remaining checks
	I0920 10:18:02.776837    3434 status.go:176] ha-930000-m04 status: &{Name:ha-930000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": ha-930000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": ha-930000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr": ha-930000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-930000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (32.472209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-930000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-930000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181328166s)

                                                
                                                
-- stdout --
	* [ha-930000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-930000" primary control-plane node in "ha-930000" cluster
	* Restarting existing qemu2 VM for "ha-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:18:02.839068    3438 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:18:02.839187    3438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:02.839190    3438 out.go:358] Setting ErrFile to fd 2...
	I0920 10:18:02.839192    3438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:02.839326    3438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:18:02.840382    3438 out.go:352] Setting JSON to false
	I0920 10:18:02.856437    3438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2845,"bootTime":1726849837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:18:02.856500    3438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:18:02.861867    3438 out.go:177] * [ha-930000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:18:02.869839    3438 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:18:02.869884    3438 notify.go:220] Checking for updates...
	I0920 10:18:02.877778    3438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:18:02.881804    3438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:18:02.884783    3438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:18:02.887800    3438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:18:02.890926    3438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:18:02.892659    3438 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:18:02.892917    3438 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:18:02.895788    3438 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:18:02.902673    3438 start.go:297] selected driver: qemu2
	I0920 10:18:02.902679    3438 start.go:901] validating driver "qemu2" against &{Name:ha-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-930000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:18:02.902777    3438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:18:02.905004    3438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:18:02.905030    3438 cni.go:84] Creating CNI manager for ""
	I0920 10:18:02.905049    3438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 10:18:02.905085    3438 start.go:340] cluster config:
	{Name:ha-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-930000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:18:02.908518    3438 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:18:02.916809    3438 out.go:177] * Starting "ha-930000" primary control-plane node in "ha-930000" cluster
	I0920 10:18:02.920765    3438 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:18:02.920780    3438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:18:02.920786    3438 cache.go:56] Caching tarball of preloaded images
	I0920 10:18:02.920840    3438 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:18:02.920845    3438 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:18:02.920916    3438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/ha-930000/config.json ...
	I0920 10:18:02.921374    3438 start.go:360] acquireMachinesLock for ha-930000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:18:02.921412    3438 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "ha-930000"
	I0920 10:18:02.921422    3438 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:18:02.921427    3438 fix.go:54] fixHost starting: 
	I0920 10:18:02.921548    3438 fix.go:112] recreateIfNeeded on ha-930000: state=Stopped err=<nil>
	W0920 10:18:02.921556    3438 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:18:02.924826    3438 out.go:177] * Restarting existing qemu2 VM for "ha-930000" ...
	I0920 10:18:02.932788    3438 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:18:02.932824    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:89:18:2e:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/disk.qcow2
	I0920 10:18:02.934862    3438 main.go:141] libmachine: STDOUT: 
	I0920 10:18:02.934883    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:18:02.934911    3438 fix.go:56] duration metric: took 13.483583ms for fixHost
	I0920 10:18:02.934916    3438 start.go:83] releasing machines lock for "ha-930000", held for 13.49975ms
	W0920 10:18:02.934922    3438 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:18:02.934955    3438 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:18:02.934960    3438 start.go:729] Will try again in 5 seconds ...
	I0920 10:18:07.936980    3438 start.go:360] acquireMachinesLock for ha-930000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:18:07.937404    3438 start.go:364] duration metric: took 325.875µs to acquireMachinesLock for "ha-930000"
	I0920 10:18:07.937517    3438 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:18:07.937538    3438 fix.go:54] fixHost starting: 
	I0920 10:18:07.938233    3438 fix.go:112] recreateIfNeeded on ha-930000: state=Stopped err=<nil>
	W0920 10:18:07.938262    3438 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:18:07.942664    3438 out.go:177] * Restarting existing qemu2 VM for "ha-930000" ...
	I0920 10:18:07.949556    3438 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:18:07.949774    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:89:18:2e:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/ha-930000/disk.qcow2
	I0920 10:18:07.958713    3438 main.go:141] libmachine: STDOUT: 
	I0920 10:18:07.958785    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:18:07.958868    3438 fix.go:56] duration metric: took 21.330083ms for fixHost
	I0920 10:18:07.958889    3438 start.go:83] releasing machines lock for "ha-930000", held for 21.464083ms
	W0920 10:18:07.959084    3438 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:18:07.965536    3438 out.go:201] 
	W0920 10:18:07.969618    3438 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:18:07.969641    3438 out.go:270] * 
	* 
	W0920 10:18:07.972217    3438 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:18:07.979562    3438 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-930000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (68.344833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-930000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-930000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-930000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-930000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (30.492209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-930000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-930000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.094417ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-930000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-930000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:18:08.173259    3453 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:18:08.173429    3453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:08.173433    3453 out.go:358] Setting ErrFile to fd 2...
	I0920 10:18:08.173435    3453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:08.173558    3453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:18:08.173783    3453 mustload.go:65] Loading cluster: ha-930000
	I0920 10:18:08.174026    3453 config.go:182] Loaded profile config "ha-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0920 10:18:08.174321    3453 out.go:270] ! The control-plane node ha-930000 host is not running (will try others): state=Stopped
	! The control-plane node ha-930000 host is not running (will try others): state=Stopped
	W0920 10:18:08.174425    3453 out.go:270] ! The control-plane node ha-930000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-930000-m02 host is not running (will try others): state=Stopped
	I0920 10:18:08.177122    3453 out.go:177] * The control-plane node ha-930000-m03 host is not running: state=Stopped
	I0920 10:18:08.181035    3453 out.go:177]   To start a cluster, run: "minikube start -p ha-930000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-930000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-930000 -n ha-930000: exit status 7 (30.6745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestImageBuild/serial/Setup (10.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-978000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-978000 --driver=qemu2 : exit status 80 (10.029178833s)

                                                
                                                
-- stdout --
	* [image-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-978000" primary control-plane node in "image-978000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-978000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-978000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-978000 -n image-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-978000 -n image-978000: exit status 7 (68.214458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.10s)

                                                
                                    
TestJSONOutput/start/Command (9.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-936000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0920 10:18:22.403265    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-936000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.968852333s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0965c5b1-5f5d-4938-89f9-ad82a44d4bac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-936000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82dff7a6-2d25-43a6-af0c-cd6f757c9a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"566e7f27-85d9-4a94-9891-08d5d0032393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig"}}
	{"specversion":"1.0","id":"83895b19-c87e-4568-bec7-6b241ca4397f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"650fa0b6-8fb4-4818-a8c2-704eb9c0dce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2c38653-1aec-4f32-bb11-d5eef5fc1531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube"}}
	{"specversion":"1.0","id":"61cb07ce-2128-426c-9864-4900dbd671af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"081e24d4-3afc-4c1b-be3f-ddfd28a0a24c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f74f94c-7f4b-44a8-ba42-1831c1a14f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"afd70c17-649e-40f2-9e63-a437ea363e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-936000\" primary control-plane node in \"json-output-936000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3449c12e-c1eb-47d5-af43-ebf71019c573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f2b2e917-e61e-42f0-8fe0-053065084764","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-936000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4f329d9-7407-49fc-ac32-e5d02e5b4361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5921dcfd-dcf5-4ed2-8a7a-4103afb0f6a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"982b87f1-c2dc-42b1-8922-ca3a3380a5fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-936000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"08e73a53-a53e-4315-84d8-13a26ebcfc1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"9b4e48c9-3789-42d8-b5ff-8711bc9f8e6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-936000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.97s)

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-936000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-936000 --output=json --user=testUser: exit status 83 (79.921666ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d6a14d0d-17f3-4357-a610-c140e2304913","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-936000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"4ed61e5b-0c6a-4e22-96b9-3c63bb3a45ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-936000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-936000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-936000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-936000 --output=json --user=testUser: exit status 83 (43.794083ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-936000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-936000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-936000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-936000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-013000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-013000 --driver=qemu2 : exit status 80 (9.991264375s)

                                                
                                                
-- stdout --
	* [first-013000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-013000" primary control-plane node in "first-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-013000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:18:42.612575 -0700 PDT m=+2124.901940876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-015000 -n second-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-015000 -n second-015000: exit status 85 (80.116959ms)

                                                
                                                
-- stdout --
	* Profile "second-015000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-015000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-015000" host is not running, skipping log retrieval (state="* Profile \"second-015000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-015000\"")
helpers_test.go:175: Cleaning up "second-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-015000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:18:42.803787 -0700 PDT m=+2125.093157251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-013000 -n first-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-013000 -n first-013000: exit status 7 (30.636958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-013000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-013000
--- FAIL: TestMinikubeProfile (10.29s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-908000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-908000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.195404584s)

                                                
                                                
-- stdout --
	* [mount-start-1-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-908000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-908000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-908000 -n mount-start-1-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-908000 -n mount-start-1-908000: exit status 7 (67.57925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.8919155s)

                                                
                                                
-- stdout --
	* [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:18:53.391170    3600 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:18:53.391301    3600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:53.391305    3600 out.go:358] Setting ErrFile to fd 2...
	I0920 10:18:53.391307    3600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:18:53.391438    3600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:18:53.392548    3600 out.go:352] Setting JSON to false
	I0920 10:18:53.408690    3600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2896,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:18:53.408749    3600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:18:53.415502    3600 out.go:177] * [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:18:53.423450    3600 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:18:53.423510    3600 notify.go:220] Checking for updates...
	I0920 10:18:53.431436    3600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:18:53.434429    3600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:18:53.437423    3600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:18:53.440452    3600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:18:53.442006    3600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:18:53.445544    3600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:18:53.449460    3600 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:18:53.455386    3600 start.go:297] selected driver: qemu2
	I0920 10:18:53.455393    3600 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:18:53.455399    3600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:18:53.457756    3600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:18:53.460506    3600 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:18:53.463635    3600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:18:53.463667    3600 cni.go:84] Creating CNI manager for ""
	I0920 10:18:53.463696    3600 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 10:18:53.463701    3600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:18:53.463733    3600 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:18:53.467507    3600 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:18:53.474440    3600 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0920 10:18:53.478411    3600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:18:53.478429    3600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:18:53.478437    3600 cache.go:56] Caching tarball of preloaded images
	I0920 10:18:53.478507    3600 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:18:53.478514    3600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:18:53.478743    3600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/multinode-552000/config.json ...
	I0920 10:18:53.478756    3600 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/multinode-552000/config.json: {Name:mk7bcd5a6cd75f495d9d880d6805425b4dd1ecde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:18:53.479006    3600 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:18:53.479042    3600 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "multinode-552000"
	I0920 10:18:53.479056    3600 start.go:93] Provisioning new machine with config: &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:18:53.479083    3600 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:18:53.487373    3600 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:18:53.505887    3600 start.go:159] libmachine.API.Create for "multinode-552000" (driver="qemu2")
	I0920 10:18:53.505922    3600 client.go:168] LocalClient.Create starting
	I0920 10:18:53.505981    3600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:18:53.506014    3600 main.go:141] libmachine: Decoding PEM data...
	I0920 10:18:53.506024    3600 main.go:141] libmachine: Parsing certificate...
	I0920 10:18:53.506059    3600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:18:53.506089    3600 main.go:141] libmachine: Decoding PEM data...
	I0920 10:18:53.506098    3600 main.go:141] libmachine: Parsing certificate...
	I0920 10:18:53.506495    3600 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:18:53.670009    3600 main.go:141] libmachine: Creating SSH key...
	I0920 10:18:53.804253    3600 main.go:141] libmachine: Creating Disk image...
	I0920 10:18:53.804260    3600 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:18:53.804464    3600 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:18:53.813874    3600 main.go:141] libmachine: STDOUT: 
	I0920 10:18:53.813889    3600 main.go:141] libmachine: STDERR: 
	I0920 10:18:53.813942    3600 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2 +20000M
	I0920 10:18:53.821774    3600 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:18:53.821796    3600 main.go:141] libmachine: STDERR: 
	I0920 10:18:53.821809    3600 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:18:53.821818    3600 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:18:53.821828    3600 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:18:53.821860    3600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:39:d2:12:f5:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:18:53.823450    3600 main.go:141] libmachine: STDOUT: 
	I0920 10:18:53.823463    3600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:18:53.823482    3600 client.go:171] duration metric: took 317.563333ms to LocalClient.Create
	I0920 10:18:55.825653    3600 start.go:128] duration metric: took 2.346599083s to createHost
	I0920 10:18:55.825756    3600 start.go:83] releasing machines lock for "multinode-552000", held for 2.346767583s
	W0920 10:18:55.825821    3600 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:18:55.840058    3600 out.go:177] * Deleting "multinode-552000" in qemu2 ...
	W0920 10:18:55.867968    3600 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:18:55.867997    3600 start.go:729] Will try again in 5 seconds ...
	I0920 10:19:00.870113    3600 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:19:00.870677    3600 start.go:364] duration metric: took 443.666µs to acquireMachinesLock for "multinode-552000"
	I0920 10:19:00.870815    3600 start.go:93] Provisioning new machine with config: &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:19:00.871101    3600 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:19:00.891831    3600 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:19:00.946152    3600 start.go:159] libmachine.API.Create for "multinode-552000" (driver="qemu2")
	I0920 10:19:00.946202    3600 client.go:168] LocalClient.Create starting
	I0920 10:19:00.946323    3600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:19:00.946390    3600 main.go:141] libmachine: Decoding PEM data...
	I0920 10:19:00.946410    3600 main.go:141] libmachine: Parsing certificate...
	I0920 10:19:00.946484    3600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:19:00.946532    3600 main.go:141] libmachine: Decoding PEM data...
	I0920 10:19:00.946545    3600 main.go:141] libmachine: Parsing certificate...
	I0920 10:19:00.947070    3600 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:19:01.119585    3600 main.go:141] libmachine: Creating SSH key...
	I0920 10:19:01.182913    3600 main.go:141] libmachine: Creating Disk image...
	I0920 10:19:01.182918    3600 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:19:01.183099    3600 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:19:01.192240    3600 main.go:141] libmachine: STDOUT: 
	I0920 10:19:01.192263    3600 main.go:141] libmachine: STDERR: 
	I0920 10:19:01.192320    3600 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2 +20000M
	I0920 10:19:01.200147    3600 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:19:01.200158    3600 main.go:141] libmachine: STDERR: 
	I0920 10:19:01.200174    3600 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:19:01.200178    3600 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:19:01.200189    3600 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:19:01.200215    3600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:fe:68:e2:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:19:01.201842    3600 main.go:141] libmachine: STDOUT: 
	I0920 10:19:01.201855    3600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:19:01.201865    3600 client.go:171] duration metric: took 255.663958ms to LocalClient.Create
	I0920 10:19:03.203979    3600 start.go:128] duration metric: took 2.332891875s to createHost
	I0920 10:19:03.204050    3600 start.go:83] releasing machines lock for "multinode-552000", held for 2.333409792s
	W0920 10:19:03.204399    3600 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:19:03.220115    3600 out.go:201] 
	W0920 10:19:03.225122    3600 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:19:03.225159    3600 out.go:270] * 
	* 
	W0920 10:19:03.227616    3600 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:19:03.240105    3600 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (66.708458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
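Editor's note: every failure in this group traces back to the same step visible above: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and the profile stays Stopped. The sketch below is a minimal host-side probe, not part of the test suite; the socket path and client binary location are taken from the failing command in the log, while the direct unix-socket dial is only an approximation of what socket_vmnet_client attempts, and all names and timeouts are illustrative.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // Probe the socket_vmnet control socket: the file must exist and accept a
    // unix-domain connection. A "connection refused" here reproduces the error
    // seen in the test log above.
    func main() {
    	const sock = "/var/run/socket_vmnet" // path from the failing command above

    	if _, err := os.Stat(sock); err != nil {
    		fmt.Println("socket file missing:", err)
    		os.Exit(1)
    	}
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("daemon not accepting connections:", err) // e.g. connection refused
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is listening on", sock)
    }

If a probe like this fails on the Jenkins host, the socket_vmnet service (however it is managed there) would be the first thing to check; the minikube invocation itself never gets past this point.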

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (113.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.298292ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-552000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- rollout status deployment/busybox: exit status 1 (59.314375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.834916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:03.564018    1679 retry.go:31] will retry after 1.449445737s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.738417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:05.122546    1679 retry.go:31] will retry after 909.777237ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.95825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:06.138549    1679 retry.go:31] will retry after 1.502737355s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.613709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:07.751242    1679 retry.go:31] will retry after 3.389076371s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.45625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:11.249091    1679 retry.go:31] will retry after 3.472792808s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.832625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:14.829940    1679 retry.go:31] will retry after 7.797161728s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.088833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:22.733454    1679 retry.go:31] will retry after 5.72299804s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.71625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:28.564569    1679 retry.go:31] will retry after 24.45209884s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.492958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:19:53.122071    1679 retry.go:31] will retry after 24.228141258s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.673417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:20:17.457851    1679 retry.go:31] will retry after 38.592689993s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.234542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.536084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.209167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.219ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.677458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.497875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (113.10s)
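Editor's note: since the API server never came up, every kubectl call above fails immediately and the test spends its ~110 s in a retry loop (the "will retry after …" lines, with roughly doubling, jittered delays) before giving up. The helper below is only a sketch of that pattern, not minikube's actual retry.go; the function name, deadline, and demo error are made up for illustration.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs op with a growing, jittered delay until it succeeds or
    // the overall deadline is exceeded -- the shape of the "will retry after"
    // lines in the log above.
    func retryUntil(deadline time.Duration, op func() error) error {
    	start := time.Now()
    	delay := time.Second
    	for {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Second), err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %s: %v\n", wait.Round(time.Millisecond), err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    }

    func main() {
    	err := retryUntil(10*time.Second, func() error {
    		return errors.New(`no server found for cluster "multinode-552000"`)
    	})
    	fmt.Println(err)
    }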

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.924458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.343958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr: exit status 83 (45.683041ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:56.533392    3694 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:56.533542    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.533545    3694 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:56.533547    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.533675    3694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:56.533928    3694 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:56.534125    3694 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:56.540347    3694 out.go:177] * The control-plane node multinode-552000 host is not running: state=Stopped
	I0920 10:20:56.545215    3694 out.go:177]   To start a cluster, run: "minikube start -p multinode-552000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.491625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-552000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-552000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.602125ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-552000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-552000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-552000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.509583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-552000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-552000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-552000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-552000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.408541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
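Editor's note: the assertion above counts the entries in Config.Nodes for the profile and expects the 3 requested nodes; because the VM was never created, the profile still contains only its initial control-plane placeholder. The sketch below reproduces the same count by hand; the struct is a hypothetical stand-in (not minikube's config types), the field names and binary path are taken from the log, and the relative path assumes the test working directory.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Minimal view of "minikube profile list --output json": just enough to
    // count nodes per profile. Field names follow the JSON printed in the log.
    type profileList struct {
    	Valid []struct {
    		Name   string
    		Config struct {
    			Nodes []struct {
    				Name         string
    				ControlPlane bool
    				Worker       bool
    			}
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("profile list failed:", err)
    		return
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Println("bad JSON:", err)
    		return
    	}
    	for _, p := range pl.Valid {
    		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // expected 3, got 1 here
    	}
    }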

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr: exit status 7 (30.783084ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-552000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:56.746839    3706 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:56.747007    3706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.747010    3706 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:56.747012    3706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.747157    3706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:56.747289    3706 out.go:352] Setting JSON to true
	I0920 10:20:56.747303    3706 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:56.747351    3706 notify.go:220] Checking for updates...
	I0920 10:20:56.747535    3706 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:56.747543    3706 status.go:174] checking status of multinode-552000 ...
	I0920 10:20:56.747780    3706 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:20:56.747784    3706 status.go:377] host is not running, skipping remaining checks
	I0920 10:20:56.747786    3706 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (29.43875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
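Editor's note: the status command printed a single JSON object for the lone stopped control-plane node, while the test unmarshals into a slice ([]cluster.Status), hence the "cannot unmarshal object" error above. A tolerant decoder could accept either shape; the struct below is a hypothetical subset with only the fields visible in the output, not minikube's actual cluster.Status type.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Hypothetical subset of the fields printed by "minikube status --output json".
    type nodeStatus struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    // decodeStatuses accepts either a bare object (single-node profile) or an
    // array of objects (multi-node profile).
    func decodeStatuses(raw []byte) ([]nodeStatus, error) {
    	var many []nodeStatus
    	if err := json.Unmarshal(raw, &many); err == nil {
    		return many, nil
    	}
    	var one nodeStatus
    	if err := json.Unmarshal(raw, &one); err != nil {
    		return nil, fmt.Errorf("neither array nor object: %w", err)
    	}
    	return []nodeStatus{one}, nil
    }

    func main() {
    	raw := []byte(`{"Name":"multinode-552000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
    	statuses, err := decodeStatuses(raw)
    	fmt.Println(statuses, err)
    }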

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node stop m03: exit status 85 (46.486583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status: exit status 7 (30.715792ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (29.876541ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:56.884175    3714 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:56.884352    3714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.884355    3714 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:56.884357    3714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.884479    3714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:56.884606    3714 out.go:352] Setting JSON to false
	I0920 10:20:56.884616    3714 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:56.884687    3714 notify.go:220] Checking for updates...
	I0920 10:20:56.884827    3714 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:56.884836    3714 status.go:174] checking status of multinode-552000 ...
	I0920 10:20:56.885085    3714 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:20:56.885088    3714 status.go:377] host is not running, skipping remaining checks
	I0920 10:20:56.885090    3714 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.454917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (55.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.34975ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:56.945266    3718 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:56.945520    3718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.945523    3718 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:56.945525    3718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.945675    3718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:56.945928    3718 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:56.946129    3718 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:56.950267    3718 out.go:201] 
	W0920 10:20:56.953281    3718 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0920 10:20:56.953286    3718 out.go:270] * 
	* 
	W0920 10:20:56.954955    3718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:20:56.958239    3718 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0920 10:20:56.945266    3718 out.go:345] Setting OutFile to fd 1 ...
I0920 10:20:56.945520    3718 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:20:56.945523    3718 out.go:358] Setting ErrFile to fd 2...
I0920 10:20:56.945525    3718 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:20:56.945675    3718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:20:56.945928    3718 mustload.go:65] Loading cluster: multinode-552000
I0920 10:20:56.946129    3718 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:20:56.950267    3718 out.go:201] 
W0920 10:20:56.953281    3718 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0920 10:20:56.953286    3718 out.go:270] * 
* 
W0920 10:20:56.954955    3718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:20:56.958239    3718 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (30.393125ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:56.991870    3720 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:56.992008    3720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.992012    3720 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:56.992014    3720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:56.992162    3720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:56.992282    3720 out.go:352] Setting JSON to false
	I0920 10:20:56.992291    3720 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:56.992355    3720 notify.go:220] Checking for updates...
	I0920 10:20:56.992514    3720 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:56.992522    3720 status.go:174] checking status of multinode-552000 ...
	I0920 10:20:56.992757    3720 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:20:56.992760    3720 status.go:377] host is not running, skipping remaining checks
	I0920 10:20:56.992762    3720 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:20:56.993608    1679 retry.go:31] will retry after 601.989516ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.157125ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:57.669850    3722 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:57.670043    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:57.670047    3722 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:57.670051    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:57.670230    3722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:57.670393    3722 out.go:352] Setting JSON to false
	I0920 10:20:57.670406    3722 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:57.670450    3722 notify.go:220] Checking for updates...
	I0920 10:20:57.670669    3722 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:57.670683    3722 status.go:174] checking status of multinode-552000 ...
	I0920 10:20:57.670980    3722 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:20:57.670985    3722 status.go:377] host is not running, skipping remaining checks
	I0920 10:20:57.670988    3722 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:20:57.672098    1679 retry.go:31] will retry after 2.12604s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (73.362458ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:20:59.871532    3724 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:20:59.871753    3724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:59.871757    3724 out.go:358] Setting ErrFile to fd 2...
	I0920 10:20:59.871760    3724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:20:59.871947    3724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:20:59.872092    3724 out.go:352] Setting JSON to false
	I0920 10:20:59.872106    3724 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:20:59.872154    3724 notify.go:220] Checking for updates...
	I0920 10:20:59.872383    3724 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:20:59.872394    3724 status.go:174] checking status of multinode-552000 ...
	I0920 10:20:59.872705    3724 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:20:59.872710    3724 status.go:377] host is not running, skipping remaining checks
	I0920 10:20:59.872713    3724 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:20:59.873814    1679 retry.go:31] will retry after 2.768087559s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.363708ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:02.716247    3726 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:02.716430    3726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:02.716434    3726 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:02.716437    3726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:02.716646    3726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:02.716796    3726 out.go:352] Setting JSON to false
	I0920 10:21:02.716809    3726 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:02.716854    3726 notify.go:220] Checking for updates...
	I0920 10:21:02.717134    3726 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:02.717147    3726 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:02.717456    3726 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:02.717462    3726 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:02.717465    3726 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:21:02.718601    1679 retry.go:31] will retry after 4.322270445s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (73.050459ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:07.113987    3728 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:07.114190    3728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:07.114194    3728 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:07.114198    3728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:07.114371    3728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:07.114532    3728 out.go:352] Setting JSON to false
	I0920 10:21:07.114545    3728 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:07.114598    3728 notify.go:220] Checking for updates...
	I0920 10:21:07.114823    3728 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:07.114834    3728 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:07.115162    3728 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:07.115167    3728 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:07.115170    3728 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:21:07.116201    1679 retry.go:31] will retry after 3.715107323s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (75.050458ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:10.906383    3730 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:10.906578    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:10.906583    3730 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:10.906585    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:10.906764    3730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:10.906916    3730 out.go:352] Setting JSON to false
	I0920 10:21:10.906929    3730 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:10.906970    3730 notify.go:220] Checking for updates...
	I0920 10:21:10.907224    3730 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:10.907236    3730 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:10.907553    3730 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:10.907558    3730 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:10.907561    3730 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:21:10.908668    1679 retry.go:31] will retry after 6.330092395s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.396875ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:17.313125    3732 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:17.313330    3732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:17.313335    3732 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:17.313338    3732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:17.313518    3732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:17.313667    3732 out.go:352] Setting JSON to false
	I0920 10:21:17.313680    3732 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:17.313714    3732 notify.go:220] Checking for updates...
	I0920 10:21:17.313964    3732 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:17.313974    3732 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:17.314290    3732 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:17.314294    3732 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:17.314297    3732 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:21:17.315328    1679 retry.go:31] will retry after 12.518158194s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (73.1695ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:29.906362    3735 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:29.906565    3735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:29.906569    3735 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:29.906572    3735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:29.906771    3735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:29.906930    3735 out.go:352] Setting JSON to false
	I0920 10:21:29.906943    3735 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:29.906979    3735 notify.go:220] Checking for updates...
	I0920 10:21:29.907227    3735 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:29.907241    3735 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:29.907589    3735 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:29.907594    3735 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:29.907596    3735 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:21:29.908716    1679 retry.go:31] will retry after 22.670888276s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.170208ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:52.653384    3744 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:52.653593    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:52.653598    3744 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:52.653601    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:52.653790    3744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:52.653978    3744 out.go:352] Setting JSON to false
	I0920 10:21:52.653990    3744 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:21:52.654039    3744 notify.go:220] Checking for updates...
	I0920 10:21:52.654268    3744 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:52.654279    3744 status.go:174] checking status of multinode-552000 ...
	I0920 10:21:52.654638    3744 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:21:52.654643    3744 status.go:377] host is not running, skipping remaining checks
	I0920 10:21:52.654646    3744 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (33.249833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.77s)
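Note on the retry pattern above: after each failed status probe, a retry.go:31 line announces a longer wait (4.3s, 3.7s, 6.3s, 12.5s, 22.7s) until the test's overall budget is exhausted and the FAIL is recorded. The Go sketch below mimics that behavior against the same binary and profile named in the log; the helper name, jitter policy and deadline are illustrative assumptions, not the harness's actual retry implementation.

	// Illustrative sketch only: mirrors the "will retry after ..." lines above.
	// The function name, backoff growth and jitter are assumptions.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryStatus re-runs `minikube status` with a growing, jittered delay
	// until it exits 0 or the overall deadline expires.
	func retryStatus(profile string, deadline time.Duration) error {
		start := time.Now()
		delay := 2 * time.Second
		for {
			cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
				"status", "-v=7", "--alsologtostderr")
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("status never became healthy: %v\n%s", err, out)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // grow with jitter
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		_ = retryStatus("multinode-552000", time.Minute)
	}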

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-552000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-552000: (2.137186042s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr
E0920 10:21:55.641529    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:21:59.313906    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.2178265s)

                                                
                                                
-- stdout --
	* [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:21:54.920146    3762 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:21:54.920308    3762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:54.920312    3762 out.go:358] Setting ErrFile to fd 2...
	I0920 10:21:54.920315    3762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:21:54.920465    3762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:21:54.921661    3762 out.go:352] Setting JSON to false
	I0920 10:21:54.941211    3762 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3077,"bootTime":1726849837,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:21:54.941291    3762 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:21:54.945646    3762 out.go:177] * [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:21:54.952644    3762 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:21:54.952675    3762 notify.go:220] Checking for updates...
	I0920 10:21:54.959554    3762 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:21:54.962624    3762 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:21:54.965588    3762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:21:54.968586    3762 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:21:54.971619    3762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:21:54.973274    3762 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:21:54.973338    3762 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:21:54.977576    3762 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:21:54.984448    3762 start.go:297] selected driver: qemu2
	I0920 10:21:54.984454    3762 start.go:901] validating driver "qemu2" against &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:21:54.984504    3762 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:21:54.986797    3762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:21:54.986823    3762 cni.go:84] Creating CNI manager for ""
	I0920 10:21:54.986848    3762 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:21:54.986916    3762 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:21:54.990628    3762 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:21:54.998742    3762 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0920 10:21:55.002573    3762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:21:55.002593    3762 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:21:55.002601    3762 cache.go:56] Caching tarball of preloaded images
	I0920 10:21:55.002678    3762 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:21:55.002684    3762 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:21:55.002753    3762 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/multinode-552000/config.json ...
	I0920 10:21:55.003219    3762 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:21:55.003255    3762 start.go:364] duration metric: took 29.166µs to acquireMachinesLock for "multinode-552000"
	I0920 10:21:55.003265    3762 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:21:55.003269    3762 fix.go:54] fixHost starting: 
	I0920 10:21:55.003398    3762 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0920 10:21:55.003406    3762 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:21:55.011502    3762 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0920 10:21:55.015568    3762 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:21:55.015604    3762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:fe:68:e2:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:21:55.017641    3762 main.go:141] libmachine: STDOUT: 
	I0920 10:21:55.017664    3762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:21:55.017693    3762 fix.go:56] duration metric: took 14.421708ms for fixHost
	I0920 10:21:55.017697    3762 start.go:83] releasing machines lock for "multinode-552000", held for 14.438125ms
	W0920 10:21:55.017703    3762 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:21:55.017739    3762 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:21:55.017744    3762 start.go:729] Will try again in 5 seconds ...
	I0920 10:22:00.020421    3762 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:22:00.020798    3762 start.go:364] duration metric: took 304.083µs to acquireMachinesLock for "multinode-552000"
	I0920 10:22:00.020886    3762 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:22:00.020895    3762 fix.go:54] fixHost starting: 
	I0920 10:22:00.021304    3762 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0920 10:22:00.021324    3762 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:22:00.025706    3762 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0920 10:22:00.029597    3762 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:22:00.029757    3762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:fe:68:e2:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:22:00.039737    3762 main.go:141] libmachine: STDOUT: 
	I0920 10:22:00.039832    3762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:22:00.039936    3762 fix.go:56] duration metric: took 19.03375ms for fixHost
	I0920 10:22:00.039960    3762 start.go:83] releasing machines lock for "multinode-552000", held for 19.144083ms
	W0920 10:22:00.040197    3762 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:00.047604    3762 out.go:201] 
	W0920 10:22:00.051642    3762 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:22:00.051679    3762 out.go:270] * 
	* 
	W0920 10:22:00.054012    3762 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:22:00.061598    3762 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-552000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (33.133041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.48s)
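Every restart attempt in this test dies on the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which means nothing is listening on the unix socket the qemu2 driver needs for guest networking. A minimal reachability probe of that socket, assuming only the path quoted in the log, looks like this:

	// Diagnostic sketch: dial the socket_vmnet unix socket that the
	// socket_vmnet_client invocations in the log above depend on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure in the report:
			// the socket path exists in config, but no daemon is serving it.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe also reports connection refused on the build host, restarting whatever daemon is meant to serve /var/run/socket_vmnet is the likely fix before re-running the suite.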

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node delete m03: exit status 83 (41.267792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (30.455042ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:22:00.244099    3776 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:22:00.244248    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:00.244251    3776 out.go:358] Setting ErrFile to fd 2...
	I0920 10:22:00.244254    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:00.244393    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:22:00.244518    3776 out.go:352] Setting JSON to false
	I0920 10:22:00.244528    3776 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:22:00.244596    3776 notify.go:220] Checking for updates...
	I0920 10:22:00.244746    3776 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:22:00.244758    3776 status.go:174] checking status of multinode-552000 ...
	I0920 10:22:00.245016    3776 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:22:00.245019    3776 status.go:377] host is not running, skipping remaining checks
	I0920 10:22:00.245022    3776 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.371334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
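Here `node delete m03` fails fast with exit status 83 because the control-plane host itself is stopped, and the follow-up status call then exits 7 for the same reason. A small guard around the two commands this test already uses (a hypothetical wrapper; the binary, flags and profile name are taken verbatim from the report) would make that precondition explicit:

	// Sketch of the pre-check implied by the failure above: confirm the
	// control-plane host reports "Running" before attempting `node delete`.
	// The guard itself is an assumption for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const bin, profile = "out/minikube-darwin-arm64", "multinode-552000"

		out, _ := exec.Command(bin, "status", "--format={{.Host}}",
			"-p", profile, "-n", profile).Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Printf("host state is %q; skipping node delete\n", state)
			return
		}
		if err := exec.Command(bin, "-p", profile, "node", "delete", "m03").Run(); err != nil {
			fmt.Println("node delete failed:", err)
		}
	}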

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-552000 stop: (2.587307667s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status: exit status 7 (66.272916ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (32.775917ms)

                                                
                                                
-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:22:02.961518    3800 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:22:02.961670    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:02.961674    3800 out.go:358] Setting ErrFile to fd 2...
	I0920 10:22:02.961676    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:02.961812    3800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:22:02.961932    3800 out.go:352] Setting JSON to false
	I0920 10:22:02.961942    3800 mustload.go:65] Loading cluster: multinode-552000
	I0920 10:22:02.961999    3800 notify.go:220] Checking for updates...
	I0920 10:22:02.962157    3800 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:22:02.962165    3800 status.go:174] checking status of multinode-552000 ...
	I0920 10:22:02.962422    3800 status.go:364] multinode-552000 host status = "Stopped" (err=<nil>)
	I0920 10:22:02.962426    3800 status.go:377] host is not running, skipping remaining checks
	I0920 10:22:02.962427    3800 status.go:176] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.427958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.72s)
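Throughout this section `minikube status` exits with status 7 whenever host, kubelet and apiserver are all reported Stopped. One plausible reading is an exit code composed from per-component flags; the values below are assumptions chosen only to reproduce the observed 7, not constants quoted from minikube itself:

	// Illustrative only: a bitmask composition that yields exit status 7 when
	// host, kubelet and apiserver are all stopped, as seen repeatedly above.
	package main

	import "fmt"

	const (
		hostNotRunning      = 1 << 0 // 1 (assumed)
		kubeletNotRunning   = 1 << 1 // 2 (assumed)
		apiserverNotRunning = 1 << 2 // 4 (assumed)
	)

	func exitCode(hostStopped, kubeletStopped, apiserverStopped bool) int {
		code := 0
		if hostStopped {
			code |= hostNotRunning
		}
		if kubeletStopped {
			code |= kubeletNotRunning
		}
		if apiserverStopped {
			code |= apiserverNotRunning
		}
		return code
	}

	func main() {
		fmt.Println(exitCode(true, true, true)) // prints 7, matching the report
	}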

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.185382333s)

                                                
                                                
-- stdout --
	* [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:22:03.022072    3804 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:22:03.022200    3804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:03.022204    3804 out.go:358] Setting ErrFile to fd 2...
	I0920 10:22:03.022206    3804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:03.022335    3804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:22:03.023322    3804 out.go:352] Setting JSON to false
	I0920 10:22:03.039493    3804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3086,"bootTime":1726849837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:22:03.039573    3804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:22:03.044303    3804 out.go:177] * [multinode-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:22:03.051336    3804 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:22:03.051367    3804 notify.go:220] Checking for updates...
	I0920 10:22:03.058215    3804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:22:03.061249    3804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:22:03.064245    3804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:22:03.067269    3804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:22:03.070246    3804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:22:03.073569    3804 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:22:03.073864    3804 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:22:03.078233    3804 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:22:03.085159    3804 start.go:297] selected driver: qemu2
	I0920 10:22:03.085166    3804 start.go:901] validating driver "qemu2" against &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:22:03.085228    3804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:22:03.087551    3804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:22:03.087574    3804 cni.go:84] Creating CNI manager for ""
	I0920 10:22:03.087597    3804 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:22:03.087649    3804 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-552000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:22:03.091516    3804 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:03.099087    3804 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0920 10:22:03.103213    3804 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:22:03.103247    3804 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:22:03.103256    3804 cache.go:56] Caching tarball of preloaded images
	I0920 10:22:03.103327    3804 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:22:03.103334    3804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:22:03.103402    3804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/multinode-552000/config.json ...
	I0920 10:22:03.103858    3804 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:22:03.103888    3804 start.go:364] duration metric: took 23.209µs to acquireMachinesLock for "multinode-552000"
	I0920 10:22:03.103898    3804 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:22:03.103903    3804 fix.go:54] fixHost starting: 
	I0920 10:22:03.104026    3804 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0920 10:22:03.104035    3804 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:22:03.112182    3804 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0920 10:22:03.116238    3804 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:22:03.116283    3804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:fe:68:e2:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:22:03.118550    3804 main.go:141] libmachine: STDOUT: 
	I0920 10:22:03.118568    3804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:22:03.118598    3804 fix.go:56] duration metric: took 14.69425ms for fixHost
	I0920 10:22:03.118603    3804 start.go:83] releasing machines lock for "multinode-552000", held for 14.711125ms
	W0920 10:22:03.118614    3804 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:22:03.118644    3804 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:03.118649    3804 start.go:729] Will try again in 5 seconds ...
	I0920 10:22:08.120774    3804 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:22:08.121284    3804 start.go:364] duration metric: took 392.333µs to acquireMachinesLock for "multinode-552000"
	I0920 10:22:08.121412    3804 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:22:08.121435    3804 fix.go:54] fixHost starting: 
	I0920 10:22:08.122188    3804 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0920 10:22:08.122213    3804 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:22:08.129781    3804 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0920 10:22:08.133804    3804 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:22:08.134057    3804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:fe:68:e2:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/multinode-552000/disk.qcow2
	I0920 10:22:08.143954    3804 main.go:141] libmachine: STDOUT: 
	I0920 10:22:08.144015    3804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:22:08.144110    3804 fix.go:56] duration metric: took 22.675208ms for fixHost
	I0920 10:22:08.144135    3804 start.go:83] releasing machines lock for "multinode-552000", held for 22.824875ms
	W0920 10:22:08.144364    3804 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:08.151807    3804 out.go:201] 
	W0920 10:22:08.155676    3804 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:22:08.155833    3804 out.go:270] * 
	* 
	W0920 10:22:08.158738    3804 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:22:08.165707    3804 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (71.88125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
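The libmachine lines above show that the qemu2 driver never launches qemu-system-aarch64 directly: it hands the whole command to /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet and then pass the resulting descriptor to the guest NIC as fd 3. A heavily trimmed sketch of that wrapping (binaries and arguments taken from the log, everything else omitted):

	// Simplified sketch of the invocation shown in the libmachine log above.
	// Only the wrapping pattern is the point; most qemu arguments are dropped.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", // wrapper connects to the vmnet socket
			"/var/run/socket_vmnet",                     // socket it must reach before execing qemu
			"qemu-system-aarch64",                       // the actual VM process
			"-M", "virt,highmem=off",
			"-accel", "hvf",
			"-netdev", "socket,id=net0,fd=3", // fd 3 is supplied by socket_vmnet_client
			"-device", "virtio-net-pci,netdev=net0",
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			// With no daemon behind /var/run/socket_vmnet this reproduces the
			// "Connection refused" failure seen throughout the report.
			fmt.Printf("%s: %v\n", out, err)
		}
	}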

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000-m01 --driver=qemu2 : exit status 80 (10.041272541s)

                                                
                                                
-- stdout --
	* [multinode-552000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000-m01" primary control-plane node in "multinode-552000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 : exit status 80 (9.861189625s)

                                                
                                                
-- stdout --
	* [multinode-552000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000-m02" primary control-plane node in "multinode-552000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-552000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-552000: exit status 83 (81.661459ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-552000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.177584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)

                                                
                                    
TestPreload (10.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-764000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-764000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.852187458s)

                                                
                                                
-- stdout --
	* [test-preload-764000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-764000" primary control-plane node in "test-preload-764000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-764000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:22:28.527595    3859 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:22:28.527725    3859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:28.527728    3859 out.go:358] Setting ErrFile to fd 2...
	I0920 10:22:28.527731    3859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:22:28.527857    3859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:22:28.528889    3859 out.go:352] Setting JSON to false
	I0920 10:22:28.545018    3859 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3111,"bootTime":1726849837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:22:28.545087    3859 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:22:28.551877    3859 out.go:177] * [test-preload-764000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:22:28.559808    3859 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:22:28.559845    3859 notify.go:220] Checking for updates...
	I0920 10:22:28.566729    3859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:22:28.569819    3859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:22:28.572819    3859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:22:28.574423    3859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:22:28.577755    3859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:22:28.581143    3859 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:22:28.581203    3859 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:22:28.585644    3859 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:22:28.592764    3859 start.go:297] selected driver: qemu2
	I0920 10:22:28.592769    3859 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:22:28.592775    3859 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:22:28.595153    3859 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:22:28.597844    3859 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:22:28.600873    3859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:22:28.600907    3859 cni.go:84] Creating CNI manager for ""
	I0920 10:22:28.600931    3859 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:22:28.600936    3859 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:22:28.600964    3859 start.go:340] cluster config:
	{Name:test-preload-764000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-764000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:22:28.604746    3859 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.611760    3859 out.go:177] * Starting "test-preload-764000" primary control-plane node in "test-preload-764000" cluster
	I0920 10:22:28.615765    3859 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0920 10:22:28.615887    3859 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/test-preload-764000/config.json ...
	I0920 10:22:28.615906    3859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/test-preload-764000/config.json: {Name:mke0fc0e0401ca7ea58d393a2f4c3af1ae6823d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:22:28.615902    3859 cache.go:107] acquiring lock: {Name:mkd7684b813fc42d802e9ddf545f73e18ab1d2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.615906    3859 cache.go:107] acquiring lock: {Name:mk1fc876a1ebcd3d56bf43c3b5d7e6b3bf67b239 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.615927    3859 cache.go:107] acquiring lock: {Name:mk4a1ccc920dbd8e4b6d69669bbe4c0a3a508771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.616047    3859 cache.go:107] acquiring lock: {Name:mk244cc55593ba69625d2fc31da460e3811009d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.615899    3859 cache.go:107] acquiring lock: {Name:mkf3a17fb7edba2f6d9f0b5de338a2d6bf098be2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.616135    3859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:22:28.616164    3859 cache.go:107] acquiring lock: {Name:mk43862e3a01765270027892aad9416119c3e4d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.616189    3859 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:22:28.616212    3859 cache.go:107] acquiring lock: {Name:mk6b7a42e9bac703869f6ac6647047c6c982cf7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.616218    3859 cache.go:107] acquiring lock: {Name:mk33a0b00cf916ac51260640d32122f55bb92194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:22:28.616150    3859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:22:28.616363    3859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:22:28.616422    3859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:22:28.616464    3859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:22:28.616497    3859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:22:28.616524    3859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:22:28.616556    3859 start.go:360] acquireMachinesLock for test-preload-764000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:22:28.616595    3859 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "test-preload-764000"
	I0920 10:22:28.616609    3859 start.go:93] Provisioning new machine with config: &{Name:test-preload-764000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-764000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:22:28.616645    3859 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:22:28.624737    3859 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:22:28.629657    3859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:22:28.629699    3859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:22:28.629708    3859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:22:28.629778    3859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:22:28.629776    3859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:22:28.631568    3859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:22:28.631569    3859 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:22:28.631644    3859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:22:28.643078    3859 start.go:159] libmachine.API.Create for "test-preload-764000" (driver="qemu2")
	I0920 10:22:28.643100    3859 client.go:168] LocalClient.Create starting
	I0920 10:22:28.643164    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:22:28.643194    3859 main.go:141] libmachine: Decoding PEM data...
	I0920 10:22:28.643203    3859 main.go:141] libmachine: Parsing certificate...
	I0920 10:22:28.643238    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:22:28.643261    3859 main.go:141] libmachine: Decoding PEM data...
	I0920 10:22:28.643274    3859 main.go:141] libmachine: Parsing certificate...
	I0920 10:22:28.643624    3859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:22:28.808665    3859 main.go:141] libmachine: Creating SSH key...
	I0920 10:22:28.867987    3859 main.go:141] libmachine: Creating Disk image...
	I0920 10:22:28.868005    3859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:22:28.868200    3859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:28.877712    3859 main.go:141] libmachine: STDOUT: 
	I0920 10:22:28.877744    3859 main.go:141] libmachine: STDERR: 
	I0920 10:22:28.877856    3859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2 +20000M
	I0920 10:22:28.887159    3859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:22:28.887178    3859 main.go:141] libmachine: STDERR: 
	I0920 10:22:28.887192    3859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:28.887197    3859 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:22:28.887212    3859 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:22:28.887253    3859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:fe:fc:87:4d:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:28.889193    3859 main.go:141] libmachine: STDOUT: 
	I0920 10:22:28.889215    3859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:22:28.889235    3859 client.go:171] duration metric: took 246.135458ms to LocalClient.Create
	I0920 10:22:29.145697    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:22:29.145711    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0920 10:22:29.152289    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0920 10:22:29.152308    3859 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:22:29.152333    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:22:29.195703    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0920 10:22:29.227235    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:22:29.257479    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0920 10:22:29.438502    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0920 10:22:29.438553    3859 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 822.672583ms
	I0920 10:22:29.438616    3859 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0920 10:22:29.672003    3859 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:22:29.672106    3859 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:22:30.569641    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:22:30.569716    3859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.953870375s
	I0920 10:22:30.569746    3859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:22:30.889511    3859 start.go:128] duration metric: took 2.272877125s to createHost
	I0920 10:22:30.889581    3859 start.go:83] releasing machines lock for "test-preload-764000", held for 2.273036583s
	W0920 10:22:30.889639    3859 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:30.902672    3859 out.go:177] * Deleting "test-preload-764000" in qemu2 ...
	W0920 10:22:30.935768    3859 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:30.935799    3859 start.go:729] Will try again in 5 seconds ...
	I0920 10:22:31.989396    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0920 10:22:31.989442    3859 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.373392667s
	I0920 10:22:31.989497    3859 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0920 10:22:32.094624    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0920 10:22:32.094680    3859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.478732791s
	I0920 10:22:32.094709    3859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0920 10:22:34.111489    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0920 10:22:34.111556    3859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.495536042s
	I0920 10:22:34.111581    3859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0920 10:22:34.170409    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0920 10:22:34.170462    3859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.55471025s
	I0920 10:22:34.170502    3859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0920 10:22:34.710744    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0920 10:22:34.710793    3859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.094770791s
	I0920 10:22:34.710819    3859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0920 10:22:35.935860    3859 start.go:360] acquireMachinesLock for test-preload-764000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:22:35.936301    3859 start.go:364] duration metric: took 374.166µs to acquireMachinesLock for "test-preload-764000"
	I0920 10:22:35.936406    3859 start.go:93] Provisioning new machine with config: &{Name:test-preload-764000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-764000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:22:35.936594    3859 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:22:35.958256    3859 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:22:36.007742    3859 start.go:159] libmachine.API.Create for "test-preload-764000" (driver="qemu2")
	I0920 10:22:36.007786    3859 client.go:168] LocalClient.Create starting
	I0920 10:22:36.007903    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:22:36.007970    3859 main.go:141] libmachine: Decoding PEM data...
	I0920 10:22:36.007990    3859 main.go:141] libmachine: Parsing certificate...
	I0920 10:22:36.008055    3859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:22:36.008101    3859 main.go:141] libmachine: Decoding PEM data...
	I0920 10:22:36.008120    3859 main.go:141] libmachine: Parsing certificate...
	I0920 10:22:36.008657    3859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:22:36.181962    3859 main.go:141] libmachine: Creating SSH key...
	I0920 10:22:36.271713    3859 main.go:141] libmachine: Creating Disk image...
	I0920 10:22:36.271723    3859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:22:36.271912    3859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:36.281472    3859 main.go:141] libmachine: STDOUT: 
	I0920 10:22:36.281486    3859 main.go:141] libmachine: STDERR: 
	I0920 10:22:36.281541    3859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2 +20000M
	I0920 10:22:36.289604    3859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:22:36.289618    3859 main.go:141] libmachine: STDERR: 
	I0920 10:22:36.289628    3859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:36.289635    3859 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:22:36.289644    3859 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:22:36.289681    3859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:d9:01:46:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/test-preload-764000/disk.qcow2
	I0920 10:22:36.291450    3859 main.go:141] libmachine: STDOUT: 
	I0920 10:22:36.291464    3859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:22:36.291478    3859 client.go:171] duration metric: took 283.693334ms to LocalClient.Create
	I0920 10:22:37.530052    3859 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0920 10:22:37.530116    3859 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.914431083s
	I0920 10:22:37.530140    3859 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0920 10:22:37.530209    3859 cache.go:87] Successfully saved all images to host disk.
	I0920 10:22:38.291950    3859 start.go:128] duration metric: took 2.35538525s to createHost
	I0920 10:22:38.292017    3859 start.go:83] releasing machines lock for "test-preload-764000", held for 2.355753791s
	W0920 10:22:38.292279    3859 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-764000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-764000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:22:38.311907    3859 out.go:201] 
	W0920 10:22:38.316965    3859 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:22:38.316990    3859 out.go:270] * 
	* 
	W0920 10:22:38.319629    3859 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:22:38.335887    3859 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-764000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-20 10:22:38.353638 -0700 PDT m=+2360.649540834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-764000 -n test-preload-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-764000 -n test-preload-764000: exit status 7 (69.319625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-764000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-764000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-764000
--- FAIL: TestPreload (10.01s)

                                                
                                    
TestScheduledStopUnix (10.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-374000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-374000 --memory=2048 --driver=qemu2 : exit status 80 (9.862018166s)

                                                
                                                
-- stdout --
	* [scheduled-stop-374000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-374000" primary control-plane node in "scheduled-stop-374000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-374000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-374000" primary control-plane node in "scheduled-stop-374000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-20 10:22:48.374904 -0700 PDT m=+2370.671085834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-374000 -n scheduled-stop-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-374000 -n scheduled-stop-374000: exit status 7 (67.843958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-374000
--- FAIL: TestScheduledStopUnix (10.02s)

                                                
                                    
TestSkaffold (12.55s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3114115688 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3114115688 version: (1.067872916s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-789000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-789000 --memory=2600 --driver=qemu2 : exit status 80 (9.941703167s)

                                                
                                                
-- stdout --
	* [skaffold-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-789000" primary control-plane node in "skaffold-789000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-789000" primary control-plane node in "skaffold-789000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-20 10:23:00.930239 -0700 PDT m=+2383.226768751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-789000 -n skaffold-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-789000 -n skaffold-789000: exit status 7 (63.620125ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-789000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-789000
--- FAIL: TestSkaffold (12.55s)
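Both VM creation attempts in this run failed with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error, meaning the qemu2 driver could not reach the socket_vmnet unix socket on the build host. A minimal sanity check on such a host, assuming socket_vmnet is expected at the /var/run/socket_vmnet path shown in the output above, might look like:

	# illustrative check only; assumes the default /var/run/socket_vmnet socket path
	ls -l /var/run/socket_vmnet    # does the unix socket the qemu2 driver dials exist?
	pgrep -fl socket_vmnet         # is a socket_vmnet daemon process running at all?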

TestRunningBinaryUpgrade (600.6s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2726605673 start -p running-upgrade-444000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2726605673 start -p running-upgrade-444000 --memory=2200 --vm-driver=qemu2 : (49.606614417s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-444000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-444000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m36.751042375s)

-- stdout --
	* [running-upgrade-444000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-444000" primary control-plane node in "running-upgrade-444000" cluster
	* Updating the running qemu2 "running-upgrade-444000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0920 10:24:35.108255    4251 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:24:35.108392    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:24:35.108396    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:24:35.108398    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:24:35.108535    4251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:24:35.109570    4251 out.go:352] Setting JSON to false
	I0920 10:24:35.126005    4251 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3238,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:24:35.126077    4251 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:24:35.130503    4251 out.go:177] * [running-upgrade-444000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:24:35.136491    4251 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:24:35.136560    4251 notify.go:220] Checking for updates...
	I0920 10:24:35.144345    4251 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:24:35.148425    4251 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:24:35.151405    4251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:24:35.154476    4251 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:24:35.157422    4251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:24:35.160682    4251 config.go:182] Loaded profile config "running-upgrade-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:24:35.164389    4251 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:24:35.167382    4251 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:24:35.171471    4251 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:24:35.178417    4251 start.go:297] selected driver: qemu2
	I0920 10:24:35.178422    4251 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:24:35.178477    4251 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:24:35.180760    4251 cni.go:84] Creating CNI manager for ""
	I0920 10:24:35.180793    4251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:24:35.180822    4251 start.go:340] cluster config:
	{Name:running-upgrade-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:24:35.180869    4251 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:24:35.186339    4251 out.go:177] * Starting "running-upgrade-444000" primary control-plane node in "running-upgrade-444000" cluster
	I0920 10:24:35.190404    4251 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:24:35.190417    4251 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:24:35.190420    4251 cache.go:56] Caching tarball of preloaded images
	I0920 10:24:35.190479    4251 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:24:35.190484    4251 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:24:35.190529    4251 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/config.json ...
	I0920 10:24:35.190867    4251 start.go:360] acquireMachinesLock for running-upgrade-444000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:24:35.190899    4251 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "running-upgrade-444000"
	I0920 10:24:35.190908    4251 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:24:35.190912    4251 fix.go:54] fixHost starting: 
	I0920 10:24:35.191539    4251 fix.go:112] recreateIfNeeded on running-upgrade-444000: state=Running err=<nil>
	W0920 10:24:35.191547    4251 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:24:35.194386    4251 out.go:177] * Updating the running qemu2 "running-upgrade-444000" VM ...
	I0920 10:24:35.202242    4251 machine.go:93] provisionDockerMachine start ...
	I0920 10:24:35.202289    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.202411    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.202415    4251 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:24:35.270845    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-444000
	
	I0920 10:24:35.270858    4251 buildroot.go:166] provisioning hostname "running-upgrade-444000"
	I0920 10:24:35.270902    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.271000    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.271007    4251 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-444000 && echo "running-upgrade-444000" | sudo tee /etc/hostname
	I0920 10:24:35.341258    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-444000
	
	I0920 10:24:35.341318    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.341437    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.341445    4251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-444000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-444000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-444000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:24:35.410677    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:24:35.410688    4251 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-1143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-1143/.minikube}
	I0920 10:24:35.410696    4251 buildroot.go:174] setting up certificates
	I0920 10:24:35.410700    4251 provision.go:84] configureAuth start
	I0920 10:24:35.410705    4251 provision.go:143] copyHostCerts
	I0920 10:24:35.410774    4251 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem, removing ...
	I0920 10:24:35.410779    4251 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem
	I0920 10:24:35.410911    4251 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem (1078 bytes)
	I0920 10:24:35.411095    4251 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem, removing ...
	I0920 10:24:35.411098    4251 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem
	I0920 10:24:35.411143    4251 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem (1123 bytes)
	I0920 10:24:35.411247    4251 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem, removing ...
	I0920 10:24:35.411250    4251 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem
	I0920 10:24:35.411293    4251 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem (1679 bytes)
	I0920 10:24:35.411399    4251 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-444000 san=[127.0.0.1 localhost minikube running-upgrade-444000]
	I0920 10:24:35.567903    4251 provision.go:177] copyRemoteCerts
	I0920 10:24:35.567961    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:24:35.567970    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:24:35.604473    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:24:35.611888    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:24:35.618717    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:24:35.625806    4251 provision.go:87] duration metric: took 215.101958ms to configureAuth
	I0920 10:24:35.625815    4251 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:24:35.625927    4251 config.go:182] Loaded profile config "running-upgrade-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:24:35.625965    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.626077    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.626082    4251 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:24:35.694405    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:24:35.694414    4251 buildroot.go:70] root file system type: tmpfs
	I0920 10:24:35.694468    4251 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:24:35.694519    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.694630    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.694662    4251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:24:35.766300    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:24:35.766364    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.766493    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.766502    4251 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:24:35.837466    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:24:35.837476    4251 machine.go:96] duration metric: took 635.2465ms to provisionDockerMachine
	I0920 10:24:35.837482    4251 start.go:293] postStartSetup for "running-upgrade-444000" (driver="qemu2")
	I0920 10:24:35.837488    4251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:24:35.837562    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:24:35.837571    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:24:35.874463    4251 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:24:35.875787    4251 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:24:35.875794    4251 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/addons for local assets ...
	I0920 10:24:35.875868    4251 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/files for local assets ...
	I0920 10:24:35.875962    4251 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0920 10:24:35.876065    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:24:35.878878    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:24:35.885578    4251 start.go:296] duration metric: took 48.093333ms for postStartSetup
	I0920 10:24:35.885591    4251 fix.go:56] duration metric: took 694.699292ms for fixHost
	I0920 10:24:35.885630    4251 main.go:141] libmachine: Using SSH client type: native
	I0920 10:24:35.885739    4251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101311c00] 0x101314440 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0920 10:24:35.885744    4251 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:24:35.952774    4251 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853075.908894888
	
	I0920 10:24:35.952784    4251 fix.go:216] guest clock: 1726853075.908894888
	I0920 10:24:35.952788    4251 fix.go:229] Guest: 2024-09-20 10:24:35.908894888 -0700 PDT Remote: 2024-09-20 10:24:35.885593 -0700 PDT m=+0.797650792 (delta=23.301888ms)
	I0920 10:24:35.952800    4251 fix.go:200] guest clock delta is within tolerance: 23.301888ms
	I0920 10:24:35.952803    4251 start.go:83] releasing machines lock for "running-upgrade-444000", held for 761.921083ms
	I0920 10:24:35.952881    4251 ssh_runner.go:195] Run: cat /version.json
	I0920 10:24:35.952895    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:24:35.953089    4251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:24:35.953110    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	W0920 10:24:35.953531    4251 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50244: connect: connection refused
	I0920 10:24:35.953554    4251 retry.go:31] will retry after 354.631185ms: dial tcp [::1]:50244: connect: connection refused
	W0920 10:24:36.352506    4251 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:24:36.352595    4251 ssh_runner.go:195] Run: systemctl --version
	I0920 10:24:36.355078    4251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:24:36.357095    4251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:24:36.357138    4251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:24:36.360063    4251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:24:36.364369    4251 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:24:36.364375    4251 start.go:495] detecting cgroup driver to use...
	I0920 10:24:36.364437    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:24:36.369572    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:24:36.372676    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:24:36.376227    4251 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:24:36.376254    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:24:36.379656    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:24:36.383239    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:24:36.386072    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:24:36.388986    4251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:24:36.392405    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:24:36.396017    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:24:36.399425    4251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:24:36.402509    4251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:24:36.405135    4251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:24:36.408075    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:36.497248    4251 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:24:36.507968    4251 start.go:495] detecting cgroup driver to use...
	I0920 10:24:36.508047    4251 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:24:36.513367    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:24:36.517803    4251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:24:36.533709    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:24:36.538420    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:24:36.542860    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:24:36.548098    4251 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:24:36.549307    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:24:36.552175    4251 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:24:36.557124    4251 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:24:36.651581    4251 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:24:36.752406    4251 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:24:36.752475    4251 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:24:36.757634    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:36.844988    4251 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:24:50.211997    4251 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.367364208s)
	I0920 10:24:50.212073    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:24:50.216461    4251 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:24:50.222622    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:24:50.227952    4251 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:24:50.313385    4251 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:24:50.393848    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:50.469142    4251 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:24:50.475465    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:24:50.479917    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:50.567479    4251 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:24:50.605876    4251 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:24:50.605961    4251 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:24:50.608078    4251 start.go:563] Will wait 60s for crictl version
	I0920 10:24:50.608141    4251 ssh_runner.go:195] Run: which crictl
	I0920 10:24:50.609499    4251 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:24:50.621696    4251 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:24:50.621774    4251 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:24:50.635278    4251 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:24:50.656219    4251 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:24:50.656366    4251 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:24:50.657659    4251 kubeadm.go:883] updating cluster {Name:running-upgrade-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:24:50.657705    4251 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:24:50.657752    4251 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:24:50.667973    4251 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:24:50.667982    4251 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:24:50.668037    4251 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:24:50.671022    4251 ssh_runner.go:195] Run: which lz4
	I0920 10:24:50.672294    4251 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:24:50.673415    4251 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:24:50.673426    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:24:51.588123    4251 docker.go:649] duration metric: took 915.90225ms to copy over tarball
	I0920 10:24:51.588200    4251 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:24:52.769135    4251 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.180951625s)
	I0920 10:24:52.769149    4251 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:24:52.785182    4251 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:24:52.788459    4251 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:24:52.793753    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:52.881778    4251 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:24:54.076894    4251 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.195132375s)
	I0920 10:24:54.076991    4251 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:24:54.088103    4251 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:24:54.088113    4251 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:24:54.088118    4251 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:24:54.093499    4251 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:24:54.095773    4251 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:24:54.097281    4251 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:24:54.097287    4251 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:24:54.099071    4251 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:24:54.099122    4251 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:24:54.100560    4251 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:24:54.100816    4251 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:24:54.101948    4251 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:24:54.101987    4251 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:24:54.103228    4251 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:24:54.103798    4251 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:24:54.104249    4251 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:24:54.104620    4251 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:24:54.105362    4251 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:24:54.105908    4251 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:24:54.501757    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:24:54.514002    4251 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:24:54.514037    4251 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:24:54.514113    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:24:54.525143    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0920 10:24:54.531020    4251 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:24:54.531147    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:24:54.538698    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:24:54.540458    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:24:54.541576    4251 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:24:54.541592    4251 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:24:54.541624    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:24:54.555114    4251 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:24:54.555132    4251 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:24:54.555220    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:24:54.556271    4251 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:24:54.556285    4251 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:24:54.556328    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:24:54.559218    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:24:54.559347    4251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:24:54.566040    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:24:54.569130    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:24:54.569249    4251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:24:54.573530    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:24:54.573532    4251 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:24:54.573567    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:24:54.581061    4251 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:24:54.581084    4251 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:24:54.581157    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:24:54.584036    4251 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:24:54.584062    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:24:54.592224    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:24:54.603059    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:24:54.603211    4251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:24:54.615331    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:24:54.624536    4251 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:24:54.624562    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:24:54.625201    4251 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:24:54.625220    4251 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:24:54.625277    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:24:54.649194    4251 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:24:54.649220    4251 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:24:54.649288    4251 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:24:54.673701    4251 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:24:54.673716    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:24:54.687304    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:24:54.706771    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:24:54.792922    4251 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:24:54.792950    4251 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:24:54.792957    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:24:54.856792    4251 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0920 10:24:54.902910    4251 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:24:54.903043    4251 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:24:54.940171    4251 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:24:54.940190    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:24:54.940508    4251 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:24:54.940523    4251 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:24:54.940591    4251 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:24:55.080508    4251 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:24:55.565994    4251 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:24:55.566507    4251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:24:55.571801    4251 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:24:55.571857    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:24:55.631849    4251 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:24:55.631864    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:24:55.865636    4251 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:24:55.865677    4251 cache_images.go:92] duration metric: took 1.777601167s to LoadCachedImages
	W0920 10:24:55.865715    4251 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0920 10:24:55.865721    4251 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:24:55.865770    4251 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-444000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:24:55.865849    4251 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:24:55.879256    4251 cni.go:84] Creating CNI manager for ""
	I0920 10:24:55.879268    4251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:24:55.879274    4251 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:24:55.879282    4251 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-444000 NodeName:running-upgrade-444000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:24:55.879343    4251 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-444000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:24:55.879404    4251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:24:55.882214    4251 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:24:55.882247    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:24:55.885079    4251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:24:55.889887    4251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:24:55.894891    4251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:24:55.900193    4251 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:24:55.901480    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:24:55.983208    4251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:24:55.988113    4251 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000 for IP: 10.0.2.15
	I0920 10:24:55.988121    4251 certs.go:194] generating shared ca certs ...
	I0920 10:24:55.988129    4251 certs.go:226] acquiring lock for ca certs: {Name:mk7151e0388cf18b174fabc4929e6178a41b4c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:24:55.988293    4251 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key
	I0920 10:24:55.988350    4251 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key
	I0920 10:24:55.988355    4251 certs.go:256] generating profile certs ...
	I0920 10:24:55.988436    4251 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.key
	I0920 10:24:55.988463    4251 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key.04427180
	I0920 10:24:55.988471    4251 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt.04427180 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:24:56.052852    4251 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt.04427180 ...
	I0920 10:24:56.052857    4251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt.04427180: {Name:mkf3b9016798dee3c875f9cb3c64a2af5495dfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:24:56.053277    4251 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key.04427180 ...
	I0920 10:24:56.053282    4251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key.04427180: {Name:mkc314b48b9644634804796704bfdfd7df6d5f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:24:56.053443    4251 certs.go:381] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt.04427180 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt
	I0920 10:24:56.053572    4251 certs.go:385] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key.04427180 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key
	I0920 10:24:56.053722    4251 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/proxy-client.key
	I0920 10:24:56.053847    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem (1338 bytes)
	W0920 10:24:56.053876    4251 certs.go:480] ignoring /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0920 10:24:56.053880    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 10:24:56.053907    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:24:56.053931    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:24:56.053956    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem (1679 bytes)
	I0920 10:24:56.054009    4251 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:24:56.054321    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:24:56.061328    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:24:56.068848    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:24:56.076502    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:24:56.084248    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:24:56.091737    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 10:24:56.099151    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:24:56.106051    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:24:56.112787    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0920 10:24:56.119958    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:24:56.127161    4251 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0920 10:24:56.134099    4251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:24:56.139360    4251 ssh_runner.go:195] Run: openssl version
	I0920 10:24:56.141285    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0920 10:24:56.144618    4251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0920 10:24:56.146163    4251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:59 /usr/share/ca-certificates/16792.pem
	I0920 10:24:56.146206    4251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0920 10:24:56.148009    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:24:56.151232    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:24:56.154134    4251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:24:56.155731    4251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:24:56.155756    4251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:24:56.157617    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:24:56.160400    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0920 10:24:56.163839    4251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0920 10:24:56.165236    4251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:59 /usr/share/ca-certificates/1679.pem
	I0920 10:24:56.165264    4251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0920 10:24:56.167179    4251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
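The three-step pattern repeated above installs each CA certificate and then registers it under its OpenSSL subject-hash name so the system trust store can find it. A minimal sketch of the same steps for the 1679.pem certificate shown in the log (the 51391683 hash value is the one the log links to; the comment is only illustrative):

    sudo ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem
    openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem    # prints the subject hash, 51391683 here
    sudo ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0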
	I0920 10:24:56.170116    4251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:24:56.171635    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:24:56.173733    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:24:56.175579    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:24:56.177750    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:24:56.180011    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:24:56.181970    4251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
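Before reusing the existing control-plane certificates, minikube verifies that none of them expires within the next 24 hours (86400 seconds) via `openssl x509 -checkend`. A minimal sketch of the equivalent check over the certificates listed above, written as a loop for brevity (the log runs them one at a time):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 || echo "$c expires within 24h"
    done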
	I0920 10:24:56.183962    4251 kubeadm.go:392] StartCluster: {Name:running-upgrade-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:24:56.184032    4251 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:24:56.194174    4251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:24:56.197196    4251 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:24:56.197208    4251 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:24:56.197234    4251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:24:56.200088    4251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:24:56.200341    4251 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-444000" does not appear in /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:24:56.200394    4251 kubeconfig.go:62] /Users/jenkins/minikube-integration/19672-1143/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-444000" cluster setting kubeconfig missing "running-upgrade-444000" context setting]
	I0920 10:24:56.200525    4251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:24:56.201175    4251 kapi.go:59] client config for running-upgrade-444000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028ea030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:24:56.201504    4251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:24:56.204255    4251 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-444000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0920 10:24:56.204260    4251 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:24:56.204309    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:24:56.217873    4251 docker.go:483] Stopping containers: [e3f1210186a9 eea42d0073e1 ff40eb4b128a 0f066600e355 dd19e5d19d98 4bec9e0dfc18 0a0db24a147d 1fdd341b2f16 30d4135e2d3d 5d08422a3253 1f357b7fc58c baab7e3b70f3]
	I0920 10:24:56.217950    4251 ssh_runner.go:195] Run: docker stop e3f1210186a9 eea42d0073e1 ff40eb4b128a 0f066600e355 dd19e5d19d98 4bec9e0dfc18 0a0db24a147d 1fdd341b2f16 30d4135e2d3d 5d08422a3253 1f357b7fc58c baab7e3b70f3
	I0920 10:24:56.229001    4251 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:24:56.336228    4251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:24:56.340242    4251 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 20 17:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 20 17:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 20 17:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 20 17:24 /etc/kubernetes/scheduler.conf
	
	I0920 10:24:56.340280    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0920 10:24:56.343635    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:24:56.343668    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:24:56.347160    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0920 10:24:56.350564    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:24:56.350596    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:24:56.354069    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0920 10:24:56.356829    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:24:56.356856    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:24:56.359694    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0920 10:24:56.362790    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:24:56.362817    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:24:56.365963    4251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:24:56.368642    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:24:56.390028    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:24:57.126105    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:24:57.363688    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:24:57.385240    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:24:57.409236    4251 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:24:57.409316    4251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:24:57.911647    4251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:24:58.411374    4251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:24:58.415551    4251 api_server.go:72] duration metric: took 1.006344625s to wait for apiserver process to appear ...
	I0920 10:24:58.415559    4251 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:24:58.415569    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:03.417444    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:03.417490    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:08.417710    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:08.417812    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:13.418596    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:13.418686    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:18.420151    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:18.420259    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:23.421952    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:23.422045    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:28.424259    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:28.424376    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:33.427014    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:33.427111    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:38.429776    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:38.429878    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:43.432650    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:43.432748    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:48.435308    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:48.435403    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:53.437334    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:53.437414    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:25:58.439894    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:25:58.440257    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:25:58.468164    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:25:58.468296    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:25:58.485428    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:25:58.485517    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:25:58.500826    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:25:58.500912    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:25:58.511983    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:25:58.512081    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:25:58.522344    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:25:58.522439    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:25:58.532273    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:25:58.532356    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:25:58.542056    4251 logs.go:276] 0 containers: []
	W0920 10:25:58.542068    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:25:58.542139    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:25:58.554039    4251 logs.go:276] 0 containers: []
	W0920 10:25:58.554049    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:25:58.554057    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:25:58.554063    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:25:58.568086    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:25:58.568096    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:25:58.592166    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:25:58.592176    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:25:58.603360    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:25:58.603373    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:25:58.615102    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:25:58.615112    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:25:58.682066    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:25:58.682076    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:25:58.696963    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:25:58.696972    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:25:58.708953    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:25:58.708962    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:25:58.720802    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:25:58.720829    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:25:58.733154    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:25:58.733165    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:25:58.770289    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:25:58.770296    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:25:58.775106    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:25:58.775114    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:25:58.790371    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:25:58.790380    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:25:58.807333    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:25:58.807344    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:25:58.820119    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:25:58.820130    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
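Each diagnostic cycle above ends with minikube re-probing the apiserver health endpoint, and every probe in this run times out after roughly 5 seconds without a response. A minimal sketch of an equivalent manual probe from inside the guest, assuming curl is available there (the endpoint and the 5-second budget are taken from the log; -k skips TLS verification since the cluster CA is not in the default trust store):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz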
	I0920 10:26:01.347417    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:06.349717    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:06.350287    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:06.384405    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:06.384567    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:06.405864    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:06.405981    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:06.420522    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:06.420609    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:06.432954    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:06.433052    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:06.446056    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:06.446132    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:06.456789    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:06.456870    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:06.467053    4251 logs.go:276] 0 containers: []
	W0920 10:26:06.467065    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:06.467133    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:06.484423    4251 logs.go:276] 0 containers: []
	W0920 10:26:06.484437    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:06.484444    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:06.484450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:06.495943    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:06.495972    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:06.513432    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:06.513444    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:06.517610    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:06.517619    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:06.535109    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:06.535122    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:06.549740    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:06.549750    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:06.562034    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:06.562049    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:06.583452    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:06.583465    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:06.594994    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:06.595011    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:06.606186    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:06.606198    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:06.642257    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:06.642263    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:06.677773    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:06.677782    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:06.690373    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:06.690382    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:06.705078    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:06.705093    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:06.718163    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:06.718174    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:09.246009    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:14.246868    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:14.247414    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:14.282206    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:14.282379    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:14.302093    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:14.302226    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:14.317919    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:14.318007    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:14.329335    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:14.329412    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:14.340149    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:14.340227    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:14.350608    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:14.350693    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:14.360562    4251 logs.go:276] 0 containers: []
	W0920 10:26:14.360580    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:14.360647    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:14.370442    4251 logs.go:276] 0 containers: []
	W0920 10:26:14.370454    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:14.370462    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:14.370468    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:14.387939    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:14.387949    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:14.406061    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:14.406070    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:14.420097    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:14.420106    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:14.435351    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:14.435365    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:14.450126    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:14.450139    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:14.464771    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:14.464780    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:14.500320    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:14.500327    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:14.504447    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:14.504455    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:14.515497    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:14.515508    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:14.553061    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:14.553071    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:14.566987    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:14.566998    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:14.579170    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:14.579181    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:14.603282    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:14.603290    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:14.614618    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:14.614631    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:17.128109    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:22.130774    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:22.131350    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:22.164147    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:22.164311    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:22.183826    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:22.183944    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:22.198237    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:22.198329    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:22.210677    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:22.210758    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:22.220762    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:22.220842    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:22.231202    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:22.231284    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:22.241431    4251 logs.go:276] 0 containers: []
	W0920 10:26:22.241446    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:22.241515    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:22.252235    4251 logs.go:276] 0 containers: []
	W0920 10:26:22.252247    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:22.252255    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:22.252262    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:22.276544    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:22.276552    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:22.290179    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:22.290199    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:22.307756    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:22.307770    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:22.318730    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:22.318744    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:22.357912    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:22.357926    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:22.363024    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:22.363034    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:22.374148    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:22.374157    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:22.388601    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:22.388612    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:22.407893    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:22.407904    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:22.419997    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:22.420007    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:22.435444    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:22.435455    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:22.447835    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:22.447850    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:22.482783    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:22.482799    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:22.496796    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:22.496805    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:25.015344    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:30.017881    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:30.018399    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:30.066221    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:30.066385    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:30.087568    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:30.087669    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:30.101200    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:30.101292    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:30.112549    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:30.112636    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:30.124774    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:30.124857    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:30.135822    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:30.135898    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:30.146254    4251 logs.go:276] 0 containers: []
	W0920 10:26:30.146267    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:30.146342    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:30.156933    4251 logs.go:276] 0 containers: []
	W0920 10:26:30.156943    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:30.156950    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:30.156955    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:30.168472    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:30.168483    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:30.193293    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:30.193303    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:30.205297    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:30.205307    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:30.241179    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:30.241193    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:30.255586    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:30.255597    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:30.271872    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:30.271883    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:30.283774    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:30.283787    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:30.307938    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:30.307945    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:30.344168    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:30.344174    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:30.358944    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:30.358954    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:30.369963    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:30.369973    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:30.374641    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:30.374648    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:30.388637    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:30.388652    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:30.402392    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:30.402403    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:32.915101    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:37.915999    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:37.916631    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:37.956473    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:37.956633    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:37.978104    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:37.978218    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:37.993667    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:37.993761    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:38.006518    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:38.006613    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:38.016993    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:38.017071    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:38.029415    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:38.029490    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:38.039673    4251 logs.go:276] 0 containers: []
	W0920 10:26:38.039686    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:38.039756    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:38.049921    4251 logs.go:276] 0 containers: []
	W0920 10:26:38.049930    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:38.049939    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:38.049944    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:38.054847    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:38.054854    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:38.068554    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:38.068564    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:38.082882    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:38.082895    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:38.094871    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:38.094882    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:38.129881    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:38.129893    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:38.154391    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:38.154400    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:38.169615    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:38.169628    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:38.185110    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:38.185121    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:38.196672    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:38.196685    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:38.213574    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:38.213585    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:38.226079    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:38.226091    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:38.263161    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:38.263173    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:38.275739    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:38.275755    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:38.286870    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:38.286880    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
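	The block above repeats for the rest of this test: minikube issues a GET against the apiserver's /healthz endpoint, the request times out after roughly five seconds with "Client.Timeout exceeded while awaiting headers", and the runner falls back to collecting component logs before retrying a few seconds later. As a rough illustration only (a hypothetical sketch in Go, not the actual api_server.go code; the URL, retry interval, and timeout values are simply taken from the log above), a poll loop with a per-request client timeout looks like this:

	// Minimal sketch of polling an apiserver /healthz endpoint with a
	// per-request client timeout; a hung apiserver produces the
	// "Client.Timeout exceeded while awaiting headers" error seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, interval, timeout time.Duration, deadline time.Time) error {
		client := &http.Client{
			Timeout: timeout, // per-request timeout (~5s in the log above)
			Transport: &http.Transport{
				// the cluster serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(interval) // wait before the next probe
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz",
			3*time.Second, 5*time.Second, time.Now().Add(2*time.Minute))
		fmt.Println(err)
	}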
	I0920 10:26:40.812760    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:45.815364    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:45.815974    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:45.855548    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:45.855711    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:45.876945    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:45.877057    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:45.891574    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:45.891664    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:45.903795    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:45.903877    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:45.914421    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:45.914497    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:45.925477    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:45.925558    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:45.935491    4251 logs.go:276] 0 containers: []
	W0920 10:26:45.935502    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:45.935568    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:45.946105    4251 logs.go:276] 0 containers: []
	W0920 10:26:45.946118    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:45.946127    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:45.946133    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:45.957167    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:45.957177    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:45.971605    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:45.971616    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:45.989699    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:45.989709    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:46.001504    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:46.001521    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:46.015411    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:46.015421    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:46.030682    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:46.030693    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:46.046001    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:46.046011    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:46.071090    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:46.071100    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:26:46.090154    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:46.090166    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:46.101901    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:46.101910    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:46.140266    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:46.140273    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:46.173497    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:46.173510    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:46.185393    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:46.185407    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:46.209550    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:46.209557    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:48.715800    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:26:53.718461    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:26:53.718551    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:26:53.733566    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:26:53.733640    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:26:53.745514    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:26:53.745592    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:26:53.755774    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:26:53.755854    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:26:53.766317    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:26:53.766393    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:26:53.777267    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:26:53.777337    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:26:53.788691    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:26:53.788759    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:26:53.798716    4251 logs.go:276] 0 containers: []
	W0920 10:26:53.798729    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:26:53.798784    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:26:53.809076    4251 logs.go:276] 0 containers: []
	W0920 10:26:53.809087    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:26:53.809093    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:26:53.809099    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:26:53.824391    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:26:53.824405    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:26:53.836278    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:26:53.836293    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:26:53.847933    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:26:53.847947    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:26:53.873144    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:26:53.873154    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:26:53.877472    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:26:53.877480    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:26:53.889061    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:26:53.889072    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:26:53.903513    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:26:53.903525    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:26:53.915219    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:26:53.915232    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:26:53.929582    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:26:53.929594    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:26:53.946749    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:26:53.946764    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:26:53.958206    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:26:53.958216    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:26:53.996234    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:26:53.996244    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:26:54.034899    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:26:54.034910    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:26:54.055490    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:26:54.055500    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
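	For reference, the container enumeration and log tailing that each cycle performs (docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then docker logs --tail 400 <id> for every match, with a warning when nothing matches) can be reproduced outside the test harness. The sketch below is hypothetical Go using os/exec, not minikube's logs.go; it only mirrors the commands visible in the log:

	// Hypothetical reproduction of the per-component log collection above:
	// list container IDs for each k8s_<component> name filter, then tail
	// the last 400 lines of each matching container's logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// equivalent to: docker logs --tail 400 <id>
				logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("logs for %s [%s]: %v\n", c, id, err)
					continue
				}
				fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
			}
		}
	}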
	I0920 10:26:56.568368    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:01.571056    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:01.571633    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:01.611383    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:01.611547    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:01.631683    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:01.631797    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:01.645893    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:01.645981    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:01.658192    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:01.658269    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:01.668644    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:01.668724    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:01.678667    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:01.678751    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:01.693406    4251 logs.go:276] 0 containers: []
	W0920 10:27:01.693418    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:01.693484    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:01.704368    4251 logs.go:276] 0 containers: []
	W0920 10:27:01.704379    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:01.704387    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:01.704392    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:01.738811    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:01.738823    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:01.753187    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:01.753198    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:01.764510    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:01.764518    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:01.789033    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:01.789043    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:01.805368    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:01.805379    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:01.822666    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:01.822675    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:01.834196    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:01.834206    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:01.849589    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:01.849597    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:01.861486    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:01.861497    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:01.876875    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:01.876886    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:01.894347    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:01.894357    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:01.933689    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:01.933700    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:01.938277    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:01.938282    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:01.952534    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:01.952543    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:04.470100    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:09.472309    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:09.472818    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:09.503538    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:09.503702    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:09.526350    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:09.526465    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:09.539918    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:09.540000    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:09.553454    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:09.553545    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:09.563737    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:09.563809    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:09.576268    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:09.576338    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:09.587013    4251 logs.go:276] 0 containers: []
	W0920 10:27:09.587027    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:09.587102    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:09.597205    4251 logs.go:276] 0 containers: []
	W0920 10:27:09.597216    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:09.597224    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:09.597230    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:09.632046    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:09.632056    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:09.643811    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:09.643820    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:09.660419    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:09.660429    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:09.677920    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:09.677931    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:09.689604    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:09.689620    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:09.701732    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:09.701744    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:09.706054    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:09.706061    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:09.724722    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:09.724732    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:09.736406    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:09.736416    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:09.747562    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:09.747574    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:09.760685    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:09.760696    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:09.799132    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:09.799153    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:09.819667    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:09.819680    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:09.834109    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:09.834119    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
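	The "container status" step in each cycle runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. it prefers crictl when it is installed and otherwise (or when crictl fails) falls back to the Docker CLI. A hypothetical Go equivalent of that fallback, again only for readers reproducing the collection by hand, might look like:

	// Sketch of the crictl-or-docker fallback used for "container status".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() (string, error) {
		// Prefer the CRI-generic tool when it is on PATH.
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
				return string(out), nil
			}
		}
		// Fall back to the Docker CLI, as the shell "||" does in the log above.
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
			return
		}
		fmt.Print(out)
	}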
	I0920 10:27:12.362092    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:17.364379    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:17.364806    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:17.395951    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:17.396104    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:17.413937    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:17.414052    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:17.429415    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:17.429508    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:17.444595    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:17.444676    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:17.455089    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:17.455158    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:17.465644    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:17.465711    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:17.476466    4251 logs.go:276] 0 containers: []
	W0920 10:27:17.476482    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:17.476561    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:17.487778    4251 logs.go:276] 0 containers: []
	W0920 10:27:17.487790    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:17.487800    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:17.487806    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:17.526949    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:17.526959    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:17.561006    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:17.561019    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:17.584867    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:17.584876    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:17.589127    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:17.589136    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:17.602916    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:17.602925    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:17.615399    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:17.615416    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:17.628690    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:17.628703    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:17.642909    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:17.642922    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:17.655280    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:17.655293    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:17.666652    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:17.666663    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:17.678359    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:17.678371    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:17.696087    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:17.696102    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:17.712720    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:17.712731    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:17.724653    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:17.724663    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:20.245172    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:25.246246    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:25.246387    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:25.258848    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:25.258942    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:25.274801    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:25.274887    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:25.286605    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:25.286696    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:25.299226    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:25.299314    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:25.311229    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:25.311316    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:25.323442    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:25.323535    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:25.334920    4251 logs.go:276] 0 containers: []
	W0920 10:27:25.334934    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:25.335022    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:25.346671    4251 logs.go:276] 0 containers: []
	W0920 10:27:25.346683    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:25.346692    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:25.346699    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:25.359917    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:25.359930    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:25.364596    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:25.364606    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:25.378998    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:25.379010    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:25.391909    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:25.391921    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:25.405423    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:25.405436    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:25.424644    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:25.424661    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:25.439445    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:25.439458    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:25.466154    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:25.466170    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:25.507005    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:25.507023    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:25.522980    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:25.522993    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:25.538815    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:25.538828    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:25.578866    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:25.578878    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:25.594727    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:25.594741    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:25.607882    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:25.607898    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:28.125959    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:33.128039    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:33.128277    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:33.142851    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:33.142951    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:33.155225    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:33.155313    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:33.166181    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:33.166262    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:33.177682    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:33.177768    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:33.188142    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:33.188227    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:33.199015    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:33.199098    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:33.209679    4251 logs.go:276] 0 containers: []
	W0920 10:27:33.209694    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:33.209770    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:33.220415    4251 logs.go:276] 0 containers: []
	W0920 10:27:33.220426    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:33.220435    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:33.220441    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:33.254955    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:33.254967    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:33.267103    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:33.267120    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:33.284419    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:33.284435    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:33.309348    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:33.309358    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:33.322616    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:33.322627    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:33.337333    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:33.337348    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:33.349001    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:33.349015    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:33.363620    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:33.363630    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:33.377563    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:33.377576    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:33.389747    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:33.389757    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:33.401477    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:33.401487    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:33.438634    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:33.438651    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:33.443166    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:33.443176    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:33.457064    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:33.457074    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:35.976556    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:40.978805    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:40.978949    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:40.990195    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:40.990281    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:41.002761    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:41.002852    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:41.019716    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:41.019794    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:41.030575    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:41.030663    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:41.043656    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:41.043752    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:41.058881    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:41.058969    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:41.070456    4251 logs.go:276] 0 containers: []
	W0920 10:27:41.070469    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:41.070546    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:41.081940    4251 logs.go:276] 0 containers: []
	W0920 10:27:41.081951    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:41.081959    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:41.081965    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:41.121383    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:41.121403    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:41.127131    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:41.127142    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:41.144690    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:41.144702    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:41.156671    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:41.156683    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:41.174769    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:41.174784    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:41.187345    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:41.187363    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:41.223794    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:41.223808    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:41.236224    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:41.236239    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:41.254001    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:41.254015    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:41.265910    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:41.265924    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:41.280554    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:41.280569    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:41.294995    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:41.295010    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:41.306831    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:41.306842    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:41.331458    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:41.331474    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:43.850416    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:48.852044    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:48.852592    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:48.893088    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:48.893247    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:48.915560    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:48.915676    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:48.930636    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:48.930732    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:48.943069    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:48.943167    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:48.954760    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:48.954844    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:48.966372    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:48.966456    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:48.977295    4251 logs.go:276] 0 containers: []
	W0920 10:27:48.977308    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:48.977380    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:48.987398    4251 logs.go:276] 0 containers: []
	W0920 10:27:48.987409    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:48.987416    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:48.987422    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:49.001659    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:49.001673    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:49.013440    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:49.013451    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:49.025110    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:49.025123    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:49.038049    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:49.038060    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:49.061628    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:49.061635    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:49.074528    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:49.074538    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:49.110554    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:49.110562    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:49.114747    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:49.114757    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:49.149420    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:49.149430    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:49.163777    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:49.163787    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:49.178245    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:49.178255    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:49.190264    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:49.190278    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:49.203328    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:49.203341    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:49.218827    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:49.218836    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:51.738905    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:56.741733    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:56.742388    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:56.782905    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:56.783090    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:56.804100    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:56.804216    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:56.819025    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:56.819111    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:56.831481    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:56.831563    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:56.843288    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:56.843363    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:56.853705    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:56.853781    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:56.863991    4251 logs.go:276] 0 containers: []
	W0920 10:27:56.864004    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:56.864073    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:56.877138    4251 logs.go:276] 0 containers: []
	W0920 10:27:56.877151    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:56.877159    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:56.877165    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:56.888595    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:56.888608    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:56.900366    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:56.900380    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:56.939570    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:56.939580    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:56.943948    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:56.943955    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:56.955337    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:56.955350    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:56.966780    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:56.966791    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:56.986791    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:56.986803    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:56.999003    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:56.999012    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:57.012440    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:57.012451    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:57.026624    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:57.026633    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:57.038773    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:57.038788    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:57.062657    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:57.062667    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:57.096685    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:57.096697    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:57.111906    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:57.111916    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:27:59.632324    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:04.634516    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:04.634747    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:04.647132    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:04.647231    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:04.657900    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:04.657990    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:04.668511    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:04.668591    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:04.679635    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:04.679712    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:04.690066    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:04.690153    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:04.701187    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:04.701291    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:04.713798    4251 logs.go:276] 0 containers: []
	W0920 10:28:04.713809    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:04.713882    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:04.723986    4251 logs.go:276] 0 containers: []
	W0920 10:28:04.723997    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:04.724005    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:04.724010    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:04.759281    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:04.759292    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:04.774360    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:04.774373    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:04.787481    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:04.787491    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:04.799529    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:04.799540    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:04.813349    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:04.813358    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:04.829049    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:04.829059    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:04.841231    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:04.841242    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:04.865130    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:04.865138    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:04.869756    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:04.869766    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:04.881417    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:04.881428    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:04.899175    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:04.899188    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:04.912689    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:04.912700    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:04.951961    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:04.951972    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:04.966147    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:04.966161    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:07.478851    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:12.481047    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:12.481490    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:12.513886    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:12.514045    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:12.533590    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:12.533689    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:12.548066    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:12.548155    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:12.559841    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:12.559926    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:12.572630    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:12.572711    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:12.583149    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:12.583228    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:12.593301    4251 logs.go:276] 0 containers: []
	W0920 10:28:12.593312    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:12.593374    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:12.603489    4251 logs.go:276] 0 containers: []
	W0920 10:28:12.603504    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:12.603514    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:12.603520    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:12.608000    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:12.608013    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:12.624228    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:12.624240    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:12.636316    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:12.636328    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:12.647686    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:12.647697    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:12.684121    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:12.684130    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:12.698114    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:12.698124    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:12.713029    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:12.713038    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:12.739013    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:12.739023    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:12.764527    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:12.764538    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:12.776703    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:12.776717    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:12.789230    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:12.789241    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:12.805095    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:12.805105    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:12.838886    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:12.838903    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:12.850980    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:12.850996    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:15.375199    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:20.377583    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:20.377694    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:20.389755    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:20.389854    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:20.402030    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:20.402116    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:20.414033    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:20.414121    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:20.426816    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:20.426915    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:20.439101    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:20.439189    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:20.451591    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:20.451681    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:20.463478    4251 logs.go:276] 0 containers: []
	W0920 10:28:20.463491    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:20.463573    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:20.474906    4251 logs.go:276] 0 containers: []
	W0920 10:28:20.474919    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:20.474927    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:20.474933    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:20.493099    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:20.493111    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:20.510054    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:20.510069    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:20.522838    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:20.522856    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:20.527742    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:20.527753    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:20.543781    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:20.543791    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:20.582758    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:20.582772    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:20.594864    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:20.594881    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:20.619670    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:20.619680    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:20.634027    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:20.634036    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:20.653787    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:20.653801    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:20.665754    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:20.665769    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:20.685411    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:20.685426    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:20.696783    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:20.696794    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:20.733224    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:20.733233    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:23.249417    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:28.251474    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:28.251614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:28.266288    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:28.266375    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:28.276997    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:28.277077    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:28.288632    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:28.288718    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:28.299905    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:28.299997    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:28.311150    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:28.311237    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:28.322115    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:28.322198    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:28.333202    4251 logs.go:276] 0 containers: []
	W0920 10:28:28.333214    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:28.333290    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:28.343729    4251 logs.go:276] 0 containers: []
	W0920 10:28:28.343742    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:28.343751    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:28.343757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:28.359265    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:28.359281    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:28.375788    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:28.375803    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:28.396756    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:28.396772    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:28.437742    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:28.437758    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:28.462467    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:28.462477    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:28.474619    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:28.474631    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:28.479067    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:28.479072    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:28.491312    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:28.491328    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:28.505555    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:28.505571    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:28.519760    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:28.519772    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:28.537541    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:28.537555    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:28.572475    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:28.572487    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:28.591106    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:28.591123    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:28.602778    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:28.602793    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:31.117131    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:36.119854    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:36.120436    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:36.158977    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:36.159144    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:36.181364    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:36.181483    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:36.197437    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:36.197528    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:36.209998    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:36.210080    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:36.221087    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:36.221159    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:36.231633    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:36.231713    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:36.242409    4251 logs.go:276] 0 containers: []
	W0920 10:28:36.242420    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:36.242487    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:36.253526    4251 logs.go:276] 0 containers: []
	W0920 10:28:36.253538    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:36.253546    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:36.253552    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:36.267271    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:36.267282    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:36.282509    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:36.282519    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:36.306140    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:36.306149    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:36.310723    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:36.310732    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:36.324668    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:36.324682    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:36.339261    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:36.339271    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:36.354563    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:36.354579    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:36.366948    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:36.366959    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:36.379149    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:36.379164    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:36.397257    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:36.397267    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:36.409059    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:36.409072    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:36.448199    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:36.448212    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:36.461234    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:36.461245    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:36.473133    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:36.473146    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:39.008579    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:44.008944    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:44.009070    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:44.021375    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:44.021463    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:44.033390    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:44.033474    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:44.045648    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:44.045742    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:44.056555    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:44.056654    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:44.071760    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:44.071850    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:44.084086    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:44.084177    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:44.095646    4251 logs.go:276] 0 containers: []
	W0920 10:28:44.095658    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:44.095738    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:44.106414    4251 logs.go:276] 0 containers: []
	W0920 10:28:44.106428    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:44.106436    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:44.106443    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:44.144777    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:44.144794    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:44.157933    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:44.157943    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:44.171553    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:44.171566    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:44.186387    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:44.186402    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:44.198085    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:44.198097    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:44.217031    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:44.217044    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:44.241148    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:44.241156    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:44.252276    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:44.252291    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:44.263973    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:44.263990    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:44.268962    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:44.268968    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:44.304630    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:44.304647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:44.320295    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:44.320304    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:44.335428    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:44.335437    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:44.358135    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:44.358151    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:46.871456    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:51.873756    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:51.874016    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:51.892822    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:51.892930    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:51.906450    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:51.906535    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:51.918304    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:51.918386    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:51.928823    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:51.928903    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:51.939261    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:51.939350    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:51.949807    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:51.949891    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:51.960139    4251 logs.go:276] 0 containers: []
	W0920 10:28:51.960152    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:51.960222    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:51.970707    4251 logs.go:276] 0 containers: []
	W0920 10:28:51.970718    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:51.970725    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:51.970731    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:51.986572    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:51.986588    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:51.998391    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:51.998404    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:52.015612    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:52.015623    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:52.054181    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:52.054190    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:52.068460    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:52.068470    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:52.081820    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:52.081830    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:52.095920    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:52.095929    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:52.107603    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:52.107614    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:52.112512    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:52.112521    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:52.125012    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:52.125022    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:52.148974    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:52.148985    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:52.183719    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:52.183730    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:52.205079    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:52.205089    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:52.219659    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:52.219670    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:54.732610    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:59.733532    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:59.733578    4251 kubeadm.go:597] duration metric: took 4m3.543127083s to restartPrimaryControlPlane
	W0920 10:28:59.733616    4251 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
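
The loop above is the apiserver health wait: each probe is a GET against https://10.0.2.15:8443/healthz with a short per-request timeout, repeated until the probe succeeds or the restart budget runs out; after roughly four minutes of failed probes (the duration metric above) minikube gives up on restarting the existing control plane and resets the cluster instead. A minimal Go sketch of the same probe pattern, assuming a self-signed CA (so certificate verification is skipped) and illustrative timeouts rather than minikube's exact values:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline expires. Illustrative only; not minikube's code.
func pollHealthz(url string, perRequest, overall time.Duration) error {
	client := &http.Client{
		Timeout: perRequest, // mirrors the "Client.Timeout exceeded" errors in the log
		// The control plane uses a self-signed CA, so this sketch skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(2 * time.Second) // back off briefly between probes
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, overall)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
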
	I0920 10:28:59.733633    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:29:00.631124    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:29:00.636097    4251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:29:00.639061    4251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:29:00.641581    4251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:29:00.641587    4251 kubeadm.go:157] found existing configuration files:
	
	I0920 10:29:00.641612    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0920 10:29:00.644607    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:29:00.644633    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:29:00.647664    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0920 10:29:00.650240    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:29:00.650265    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:29:00.652930    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0920 10:29:00.656047    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:29:00.656073    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:29:00.658924    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0920 10:29:00.661372    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:29:00.661397    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
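
The grep/rm sequence above is the stale-config check reported by kubeadm.go:155-163: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so that the subsequent kubeadm init can regenerate it. A rough Go sketch of that check, with the paths and endpoint taken from the log (an illustrative reimplementation, not minikube's actual code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Endpoint and file list as they appear in the log above.
	endpoint := []byte("https://control-plane.minikube.internal:50276")
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing file or wrong endpoint: drop it and let `kubeadm init` rewrite it.
			fmt.Printf("%s does not reference %s - removing\n", conf, endpoint)
			os.Remove(conf)
		}
	}
}
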
	I0920 10:29:00.664465    4251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:29:00.683388    4251 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:29:00.683426    4251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:29:00.732445    4251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:29:00.732544    4251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:29:00.732635    4251 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:29:00.786315    4251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:29:00.789449    4251 out.go:235]   - Generating certificates and keys ...
	I0920 10:29:00.789490    4251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:29:00.789532    4251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:29:00.789595    4251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:29:00.789627    4251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:29:00.789666    4251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:29:00.789701    4251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:29:00.789732    4251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:29:00.789762    4251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:29:00.789807    4251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:29:00.789839    4251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:29:00.789861    4251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:29:00.789890    4251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:29:00.904030    4251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:29:01.030656    4251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:29:01.118868    4251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:29:01.255636    4251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:29:01.284735    4251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:29:01.285127    4251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:29:01.285190    4251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:29:01.371641    4251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:29:01.376097    4251 out.go:235]   - Booting up control plane ...
	I0920 10:29:01.376151    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:29:01.376197    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:29:01.376237    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:29:01.376285    4251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:29:01.376371    4251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:29:05.880673    4251 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506782 seconds
	I0920 10:29:05.880740    4251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:29:05.885362    4251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:29:06.403913    4251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:29:06.404185    4251 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-444000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:29:06.908578    4251 kubeadm.go:310] [bootstrap-token] Using token: a87zcg.33g53o7cj2747u9s
	I0920 10:29:06.914760    4251 out.go:235]   - Configuring RBAC rules ...
	I0920 10:29:06.914835    4251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:29:06.914900    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:29:06.920322    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:29:06.921257    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:29:06.922122    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:29:06.923056    4251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:29:06.928881    4251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:29:07.090396    4251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:29:07.313109    4251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:29:07.313505    4251 kubeadm.go:310] 
	I0920 10:29:07.313537    4251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:29:07.313543    4251 kubeadm.go:310] 
	I0920 10:29:07.313666    4251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:29:07.313674    4251 kubeadm.go:310] 
	I0920 10:29:07.313686    4251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:29:07.313728    4251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:29:07.313791    4251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:29:07.313800    4251 kubeadm.go:310] 
	I0920 10:29:07.313841    4251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:29:07.313853    4251 kubeadm.go:310] 
	I0920 10:29:07.313882    4251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:29:07.313887    4251 kubeadm.go:310] 
	I0920 10:29:07.313920    4251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:29:07.313958    4251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:29:07.314012    4251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:29:07.314016    4251 kubeadm.go:310] 
	I0920 10:29:07.314100    4251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:29:07.314151    4251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:29:07.314157    4251 kubeadm.go:310] 
	I0920 10:29:07.314203    4251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a87zcg.33g53o7cj2747u9s \
	I0920 10:29:07.314280    4251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a \
	I0920 10:29:07.314302    4251 kubeadm.go:310] 	--control-plane 
	I0920 10:29:07.314311    4251 kubeadm.go:310] 
	I0920 10:29:07.314367    4251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:29:07.314375    4251 kubeadm.go:310] 
	I0920 10:29:07.314414    4251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a87zcg.33g53o7cj2747u9s \
	I0920 10:29:07.314468    4251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a 
	I0920 10:29:07.314540    4251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
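
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 pin over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it, assuming the CA certificate sits at /var/lib/minikube/certs/ca.crt (under the certificateDir reported earlier); this is an illustrative reimplementation, not kubeadm's code:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the certificateDir "/var/lib/minikube/certs" reported above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The pin is sha256 over the certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
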
	I0920 10:29:07.314550    4251 cni.go:84] Creating CNI manager for ""
	I0920 10:29:07.314558    4251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:29:07.320212    4251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:29:07.330152    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:29:07.333056    4251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 10:29:07.337746    4251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:29:07.337793    4251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:29:07.337844    4251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-444000 minikube.k8s.io/updated_at=2024_09_20T10_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=running-upgrade-444000 minikube.k8s.io/primary=true
	I0920 10:29:07.340821    4251 ops.go:34] apiserver oom_adj: -16
	I0920 10:29:07.383552    4251 kubeadm.go:1113] duration metric: took 45.799709ms to wait for elevateKubeSystemPrivileges
	I0920 10:29:07.383664    4251 kubeadm.go:394] duration metric: took 4m11.20667775s to StartCluster
	I0920 10:29:07.383677    4251 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:29:07.383774    4251 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:29:07.384184    4251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:29:07.384389    4251 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:29:07.384398    4251 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:29:07.384432    4251 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-444000"
	I0920 10:29:07.384440    4251 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-444000"
	I0920 10:29:07.384441    4251 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-444000"
	I0920 10:29:07.384450    4251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-444000"
	W0920 10:29:07.384458    4251 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:29:07.384471    4251 config.go:182] Loaded profile config "running-upgrade-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:29:07.384472    4251 host.go:66] Checking if "running-upgrade-444000" exists ...
	I0920 10:29:07.385373    4251 kapi.go:59] client config for running-upgrade-444000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028ea030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:29:07.385499    4251 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-444000"
	W0920 10:29:07.385504    4251 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:29:07.385511    4251 host.go:66] Checking if "running-upgrade-444000" exists ...
	I0920 10:29:07.387271    4251 out.go:177] * Verifying Kubernetes components...
	I0920 10:29:07.387594    4251 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:29:07.391329    4251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:29:07.391335    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:29:07.395144    4251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:29:07.399150    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:29:07.403227    4251 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:29:07.403234    4251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:29:07.403241    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:29:07.494401    4251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:29:07.499278    4251 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:29:07.499318    4251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:29:07.503201    4251 api_server.go:72] duration metric: took 118.804333ms to wait for apiserver process to appear ...
	I0920 10:29:07.503210    4251 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:29:07.503217    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:07.553133    4251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:29:07.580972    4251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:29:07.888798    4251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:29:07.888810    4251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:29:12.505211    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:12.505271    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:17.505600    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:17.505634    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:22.506097    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:22.506141    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:27.506731    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:27.506789    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:32.507950    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:32.508009    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:37.509123    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:37.509169    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:29:37.890375    4251 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:29:37.896032    4251 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:29:37.903889    4251 addons.go:510] duration metric: took 30.520335917s for enable addons: enabled=[storage-provisioner]
	I0920 10:29:42.509903    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:42.509965    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:47.511205    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:47.511249    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:52.513263    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:52.513306    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:57.515433    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:57.515454    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:02.517535    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:02.517597    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:07.519795    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:07.519989    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:07.546420    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:07.546513    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:07.558006    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:07.558093    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:07.568523    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:07.568607    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:07.579403    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:07.579484    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:07.589668    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:07.589766    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:07.600070    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:07.600151    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:07.622330    4251 logs.go:276] 0 containers: []
	W0920 10:30:07.622340    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:07.622408    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:07.632849    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:07.632864    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:07.632869    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:07.647400    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:07.647411    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:07.664980    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:07.664990    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:07.676093    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:07.676108    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:07.687428    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:07.687437    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:07.699036    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:07.699051    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:07.711165    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:07.711174    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:07.745238    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:07.745334    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:07.746480    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:07.746484    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:07.750651    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:07.750657    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:07.783631    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:07.783647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:07.798318    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:07.798331    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:07.812974    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:07.812985    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:07.836052    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:07.836059    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:07.848683    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:07.848706    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:07.848731    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:30:07.848735    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:07.848739    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:07.848743    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:07.848746    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
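	The "Gathering logs for ..." blocks above follow a single pattern: enumerate control-plane containers by their k8s_<component> name filter, then tail each one's logs. Below is a hedged, illustrative sketch of that same pattern (helper names are hypothetical and this is not minikube's implementation), reproducing the `docker ps -a --filter name=... --format {{.ID}}` and `docker logs --tail 400` commands seen in the log:

	```go
	// Hedged sketch: enumerate k8s_* containers and tail their logs, as the
	// diagnostics pass above does. Names like containerIDs are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs runs the same filter the log shows, e.g.
	//   docker ps -a --filter name=k8s_kube-apiserver --format {{.ID}}
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out)) // handles multiple IDs (e.g. two coredns containers)
	}

	func main() {
		for _, component := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner",
		} {
			ids := containerIDs(component)
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Same tail depth as the log above.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}
	```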
	I0920 10:30:17.852674    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:22.855072    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:22.855562    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:22.894990    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:22.895156    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:22.916660    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:22.916783    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:22.935084    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:22.935179    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:22.947251    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:22.947323    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:22.957750    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:22.957826    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:22.968469    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:22.968546    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:22.978255    4251 logs.go:276] 0 containers: []
	W0920 10:30:22.978267    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:22.978336    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:22.988743    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:22.988756    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:22.988762    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:23.024500    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:23.024599    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:23.025818    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:23.025852    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:23.039850    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:23.039861    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:23.051807    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:23.051818    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:23.071265    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:23.071280    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:23.088395    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:23.088406    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:23.100025    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:23.100035    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:23.123854    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:23.123865    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:23.135434    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:23.135443    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:23.140147    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:23.140155    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:23.181220    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:23.181233    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:23.196473    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:23.196495    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:23.213507    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:23.213526    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:23.228216    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:23.228230    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:23.228255    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:30:23.228259    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:23.228262    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:23.228266    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:23.228268    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:33.232188    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:38.234469    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:38.234907    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:38.267444    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:38.267615    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:38.286649    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:38.286759    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:38.300664    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:38.300799    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:38.316714    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:38.316798    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:38.327388    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:38.327486    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:38.338176    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:38.338260    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:38.348147    4251 logs.go:276] 0 containers: []
	W0920 10:30:38.348158    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:38.348228    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:38.361773    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:38.361789    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:38.361795    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:38.385594    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:38.385606    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:38.396795    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:38.396807    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:38.429456    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:38.429553    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:38.430692    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:38.430697    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:38.435247    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:38.435256    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:38.448439    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:38.448450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:38.460332    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:38.460342    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:38.472505    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:38.472519    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:38.487458    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:38.487468    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:38.499531    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:38.499543    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:38.517337    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:38.517347    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:38.553315    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:38.553329    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:38.571629    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:38.571641    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:38.585692    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:38.585705    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:38.585731    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:30:38.585735    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:38.585738    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:38.585742    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:38.585745    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:48.587648    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:53.589795    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:53.590079    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:53.617131    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:53.617250    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:53.633156    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:53.633252    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:53.645477    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:53.645559    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:53.655868    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:53.655953    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:53.669371    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:53.669459    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:53.679650    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:53.679727    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:53.689810    4251 logs.go:276] 0 containers: []
	W0920 10:30:53.689823    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:53.689895    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:53.703125    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:53.703140    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:53.703146    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:53.708351    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:53.708360    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:53.722748    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:53.722757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:53.734901    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:53.734912    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:53.749988    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:53.750000    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:53.761426    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:53.761438    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:53.795565    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:53.795662    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:53.796882    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:53.796889    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:53.810757    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:53.810767    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:53.822775    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:53.822786    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:53.834436    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:53.834450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:53.851288    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:53.851298    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:53.862648    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:53.862657    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:53.887231    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:53.887238    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:53.931235    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:53.931246    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:53.931275    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:30:53.931280    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:53.931284    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:53.931288    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:53.931291    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:03.935181    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:08.937380    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:08.937500    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:08.952122    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:08.952205    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:08.963155    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:08.963241    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:08.974190    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:31:08.974268    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:08.985011    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:08.985098    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:08.995235    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:08.995322    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:09.006079    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:09.006161    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:09.016047    4251 logs.go:276] 0 containers: []
	W0920 10:31:09.016058    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:09.016128    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:09.037870    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:09.037885    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:09.037891    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:09.049784    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:09.049795    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:09.084387    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:09.084484    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:09.085669    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:09.085674    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:09.089813    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:09.089822    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:09.105493    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:09.105507    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:09.116950    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:09.116961    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:09.128746    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:09.128759    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:09.146171    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:09.146185    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:09.183615    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:09.183626    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:09.198022    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:09.198034    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:09.209537    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:09.209547    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:09.224634    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:09.224645    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:09.247982    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:09.247992    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:09.259993    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:09.260006    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:09.260034    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:31:09.260039    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:09.260042    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:09.260045    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:09.260049    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:19.263919    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:24.265991    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:24.266099    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:24.277237    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:24.277322    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:24.288540    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:24.288628    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:24.300659    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:24.300749    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:24.311932    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:24.312020    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:24.323508    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:24.323588    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:24.335373    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:24.335460    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:24.346955    4251 logs.go:276] 0 containers: []
	W0920 10:31:24.346969    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:24.347053    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:24.358601    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:24.358616    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:24.358622    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:24.373486    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:24.373498    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:24.393143    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:24.393159    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:24.405072    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:24.405086    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:24.417630    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:24.417642    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:24.423178    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:24.423188    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:24.438957    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:24.438965    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:24.451988    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:24.451997    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:24.464324    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:24.464333    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:24.488850    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:24.488862    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:24.523456    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:24.523557    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:24.524776    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:24.524785    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:24.536581    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:24.536592    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:24.578438    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:24.578450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:24.597522    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:24.597543    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:24.609802    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:24.609814    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:24.629532    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:24.629545    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:24.629576    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:31:24.629581    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:24.629584    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:24.629588    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:24.629591    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:34.633464    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:39.635584    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:39.635751    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:39.646699    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:39.646794    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:39.657459    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:39.657536    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:39.668193    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:39.668276    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:39.678242    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:39.678323    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:39.689205    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:39.689317    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:39.699702    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:39.699784    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:39.710158    4251 logs.go:276] 0 containers: []
	W0920 10:31:39.710172    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:39.710240    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:39.720604    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:39.720621    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:39.720628    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:39.734943    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:39.734953    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:39.746657    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:39.746668    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:39.764927    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:39.764937    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:39.798167    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:39.798272    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:39.799491    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:39.799498    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:39.811552    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:39.811563    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:39.823772    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:39.823788    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:39.838380    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:39.838390    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:39.843432    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:39.843438    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:39.881684    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:39.881700    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:39.895855    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:39.895865    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:39.907465    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:39.907475    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:39.919454    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:39.919465    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:39.936689    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:39.936699    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:39.959694    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:39.959702    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:39.971845    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:39.971856    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:39.971882    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:31:39.971886    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:39.971889    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:39.971893    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:39.971896    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:49.975835    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:54.977973    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:54.978125    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:54.989131    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:54.989227    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:55.000860    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:55.000949    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:55.012091    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:55.012180    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:55.022948    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:55.023035    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:55.033445    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:55.033525    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:55.045361    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:55.045441    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:55.056370    4251 logs.go:276] 0 containers: []
	W0920 10:31:55.056382    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:55.056453    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:55.066659    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:55.066677    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:55.066683    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:55.113411    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:55.113422    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:55.129073    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:55.129084    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:55.140434    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:55.140445    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:55.159000    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:55.159012    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:55.174177    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:55.174187    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:55.193646    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:55.193659    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:55.205585    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:55.205599    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:55.217376    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:55.217389    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:55.228823    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:55.228834    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:55.252143    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:55.252154    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:55.264596    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:55.264609    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:55.299984    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:55.300088    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:55.301315    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:55.301328    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:55.306747    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:55.306757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:55.318924    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:55.318937    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:55.331566    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:55.331581    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:55.331610    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:31:55.331616    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:55.331620    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:55.331624    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:55.331651    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:05.335495    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:10.337614    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:10.337769    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:10.353849    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:10.353944    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:10.366442    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:10.366536    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:10.379829    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:10.379910    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:10.390701    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:10.390785    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:10.401467    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:10.401561    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:10.414226    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:10.414312    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:10.424880    4251 logs.go:276] 0 containers: []
	W0920 10:32:10.424890    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:10.424960    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:10.434828    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:10.434842    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:10.434847    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:10.446272    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:10.446287    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:10.460632    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:10.460647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:10.474279    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:10.474295    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:10.485707    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:10.485720    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:10.501550    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:10.501561    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:10.526484    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:10.526492    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:10.559604    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:10.559703    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:10.560877    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:10.560884    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:10.595894    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:10.595905    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:10.612502    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:10.612511    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:10.630614    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:10.630623    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:10.634897    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:10.634903    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:10.646648    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:10.646658    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:10.659194    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:10.659232    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:10.676656    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:10.676666    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:10.691457    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:10.691465    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:10.691490    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:32:10.691495    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:10.691498    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:10.691501    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:10.691504    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:20.695375    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:25.697134    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:25.697281    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:25.711697    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:25.711804    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:25.723940    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:25.724027    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:25.734294    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:25.734379    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:25.744772    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:25.744857    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:25.755529    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:25.755614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:25.766527    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:25.766614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:25.776744    4251 logs.go:276] 0 containers: []
	W0920 10:32:25.776757    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:25.776827    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:25.787447    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:25.787467    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:25.787472    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:25.799789    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:25.799801    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:25.812127    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:25.812139    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:25.825339    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:25.825352    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:25.837876    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:25.837889    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:25.842319    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:25.842325    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:25.876953    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:25.876965    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:25.894155    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:25.894167    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:25.908536    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:25.908545    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:25.927590    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:25.927602    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:25.952014    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:25.952022    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:25.986214    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:25.986311    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:25.987496    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:25.987501    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:26.000113    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:26.000124    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:26.015768    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:26.015779    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:26.033230    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:26.033244    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:26.045525    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:26.045539    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:26.045566    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:32:26.045570    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:26.045575    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:26.045578    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:26.045580    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:36.143516    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:41.145602    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:41.145775    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:41.165680    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:41.165788    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:41.182716    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:41.182801    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:41.194643    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:41.194737    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:41.206240    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:41.206319    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:41.221630    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:41.221720    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:41.232409    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:41.232488    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:41.243956    4251 logs.go:276] 0 containers: []
	W0920 10:32:41.243968    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:41.244032    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:41.254548    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:41.254568    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:41.254572    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:41.266402    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:41.266414    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:41.278235    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:41.278250    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:41.298616    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:41.298630    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:41.311521    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:41.311533    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:41.333326    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:41.333335    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:41.344999    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:41.345013    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:41.359047    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:41.359061    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:41.373924    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:41.373937    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:41.392018    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:41.392028    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:41.426171    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:41.426267    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:41.427407    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:41.427412    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:41.432158    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:41.432165    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:41.465813    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:41.465823    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:41.477533    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:41.477546    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:41.488922    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:41.488932    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:41.513324    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:41.513333    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:41.513357    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:32:41.513362    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:41.513365    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:41.513368    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:41.513371    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:51.517451    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:56.519712    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:56.519899    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:56.531491    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:56.531563    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:56.541570    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:56.541658    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:56.552771    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:56.552860    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:56.563403    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:56.563487    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:56.573519    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:56.573597    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:56.584154    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:56.584229    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:56.594657    4251 logs.go:276] 0 containers: []
	W0920 10:32:56.594673    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:56.594737    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:56.605509    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:56.605530    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:56.605536    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:56.640505    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:56.640604    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:56.641819    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:56.641827    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:56.662086    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:56.662098    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:56.682302    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:56.682314    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:56.704214    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:56.704225    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:56.708815    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:56.708822    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:56.742862    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:56.742873    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:56.757524    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:56.757538    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:56.769894    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:56.769905    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:56.785140    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:56.785151    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:56.809462    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:56.809470    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:56.823201    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:56.823210    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:56.834290    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:56.834304    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:56.850288    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:56.850299    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:56.862028    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:56.862042    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:56.874035    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:56.874049    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:56.874075    4251 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 10:32:56.874080    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:56.874085    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	  Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:56.874088    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:56.874095    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:33:06.878191    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:11.880615    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:11.884880    4251 out.go:201] 
	W0920 10:33:11.888871    4251 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:33:11.888878    4251 out.go:270] * 
	* 
	W0920 10:33:11.889285    4251 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:33:11.899758    4251 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-444000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-20 10:33:11.984234 -0700 PDT m=+2994.202783751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-444000 -n running-upgrade-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-444000 -n running-upgrade-444000: exit status 2 (15.806899583s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-444000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-173000          | force-systemd-flag-173000 | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-928000              | force-systemd-env-928000  | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-928000           | force-systemd-env-928000  | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT | 20 Sep 24 10:23 PDT |
	| start   | -p docker-flags-076000                | docker-flags-076000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-173000             | force-systemd-flag-173000 | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-173000          | force-systemd-flag-173000 | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT | 20 Sep 24 10:23 PDT |
	| start   | -p cert-expiration-355000             | cert-expiration-355000    | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-076000 ssh               | docker-flags-076000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-076000 ssh               | docker-flags-076000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-076000                | docker-flags-076000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT | 20 Sep 24 10:23 PDT |
	| start   | -p cert-options-488000                | cert-options-488000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-488000 ssh               | cert-options-488000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-488000 -- sudo        | cert-options-488000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-488000                | cert-options-488000       | jenkins | v1.34.0 | 20 Sep 24 10:23 PDT | 20 Sep 24 10:23 PDT |
	| start   | -p running-upgrade-444000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:23 PDT | 20 Sep 24 10:24 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-444000             | running-upgrade-444000    | jenkins | v1.34.0 | 20 Sep 24 10:24 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-355000             | cert-expiration-355000    | jenkins | v1.34.0 | 20 Sep 24 10:26 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-355000             | cert-expiration-355000    | jenkins | v1.34.0 | 20 Sep 24 10:26 PDT | 20 Sep 24 10:26 PDT |
	| start   | -p kubernetes-upgrade-142000          | kubernetes-upgrade-142000 | jenkins | v1.34.0 | 20 Sep 24 10:26 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-142000          | kubernetes-upgrade-142000 | jenkins | v1.34.0 | 20 Sep 24 10:26 PDT | 20 Sep 24 10:26 PDT |
	| start   | -p kubernetes-upgrade-142000          | kubernetes-upgrade-142000 | jenkins | v1.34.0 | 20 Sep 24 10:26 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-142000          | kubernetes-upgrade-142000 | jenkins | v1.34.0 | 20 Sep 24 10:27 PDT | 20 Sep 24 10:27 PDT |
	| start   | -p stopped-upgrade-593000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:27 PDT | 20 Sep 24 10:27 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-593000 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:27 PDT | 20 Sep 24 10:27 PDT |
	| start   | -p stopped-upgrade-593000             | stopped-upgrade-593000    | jenkins | v1.34.0 | 20 Sep 24 10:27 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:27:55
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
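	For reference, each entry below follows that prefix layout: a severity letter (I = Info, W = Warning, E = Error, F = Fatal), the month and day, a timestamp, the emitting thread id, and the source file and line. A quick way to pull those fields apart, shown here only as a reading aid against the first entry that follows (the sed pattern is an assumption about the layout, not a minikube tool):

	    echo 'I0920 10:27:55.469599    4398 out.go:345] Setting OutFile to fd 1 ...' \
	      | sed -E 's/^([IWEF])([0-9]{4}) ([0-9:.]+) +([0-9]+) ([^]]+)\] (.*)$/level=\1 date=\2 time=\3 tid=\4 src=\5 msg=\6/'
	    # prints: level=I date=0920 time=10:27:55.469599 tid=4398 src=out.go:345 msg=Setting OutFile to fd 1 ...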
	I0920 10:27:55.469599    4398 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:27:55.469766    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:27:55.469771    4398 out.go:358] Setting ErrFile to fd 2...
	I0920 10:27:55.469774    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:27:55.469950    4398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:27:55.471110    4398 out.go:352] Setting JSON to false
	I0920 10:27:55.491331    4398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3438,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:27:55.491403    4398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:27:55.497059    4398 out.go:177] * [stopped-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:27:55.504910    4398 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:27:55.504960    4398 notify.go:220] Checking for updates...
	I0920 10:27:55.513038    4398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:27:55.516067    4398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:27:55.519025    4398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:27:55.522064    4398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:27:55.524988    4398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:27:55.528344    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:27:55.532031    4398 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:27:55.533446    4398 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:27:55.538012    4398 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:27:55.544889    4398 start.go:297] selected driver: qemu2
	I0920 10:27:55.544895    4398 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:27:55.544942    4398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:27:55.547660    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:27:55.547691    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:27:55.547716    4398 start.go:340] cluster config:
	{Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:27:55.547763    4398 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:27:55.556022    4398 out.go:177] * Starting "stopped-upgrade-593000" primary control-plane node in "stopped-upgrade-593000" cluster
	I0920 10:27:55.560016    4398 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:27:55.560046    4398 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:27:55.560055    4398 cache.go:56] Caching tarball of preloaded images
	I0920 10:27:55.560148    4398 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:27:55.560154    4398 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:27:55.560219    4398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/config.json ...
	I0920 10:27:55.560708    4398 start.go:360] acquireMachinesLock for stopped-upgrade-593000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:27:55.560744    4398 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "stopped-upgrade-593000"
	I0920 10:27:55.560754    4398 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:27:55.560758    4398 fix.go:54] fixHost starting: 
	I0920 10:27:55.560873    4398 fix.go:112] recreateIfNeeded on stopped-upgrade-593000: state=Stopped err=<nil>
	W0920 10:27:55.560882    4398 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:27:55.564031    4398 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-593000" ...
	I0920 10:27:56.741733    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:27:56.742388    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:27:56.782905    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:27:56.783090    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:27:56.804100    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:27:56.804216    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:27:56.819025    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:27:56.819111    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:27:56.831481    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:27:56.831563    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:27:56.843288    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:27:56.843363    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:27:56.853705    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:27:56.853781    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:27:56.863991    4251 logs.go:276] 0 containers: []
	W0920 10:27:56.864004    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:27:56.864073    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:27:56.877138    4251 logs.go:276] 0 containers: []
	W0920 10:27:56.877151    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:27:56.877159    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:27:56.877165    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:27:56.888595    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:27:56.888608    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:27:56.900366    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:27:56.900380    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:27:56.939570    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:27:56.939580    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:27:56.943948    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:27:56.943955    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:27:56.955337    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:27:56.955350    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:27:56.966780    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:27:56.966791    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:27:56.986791    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:27:56.986803    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:27:56.999003    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:27:56.999012    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:27:57.012440    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:27:57.012451    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:27:57.026624    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:27:57.026633    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:27:57.038773    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:27:57.038788    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:27:57.062657    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:27:57.062667    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:27:57.096685    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:27:57.096697    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:27:57.111906    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:27:57.111916    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
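	The 4251 process in this log is waiting for the apiserver: each healthz probe at https://10.0.2.15:8443 times out, so it enumerates the control-plane containers and dumps their recent logs before probing again. A minimal stand-alone sketch of that wait-and-dump loop (illustrative only; endpoint and filter names taken from the entries above, curl timeout chosen arbitrarily):

	    while ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	      # on failure, collect recent logs from the apiserver containers
	      # (the real flow above walks every control-plane component)
	      for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	        docker logs --tail 400 "$id"
	      done
	      sleep 5
	    done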
	I0920 10:27:59.632324    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:27:55.572016    4398 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:27:55.572090    4398 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50485-:22,hostfwd=tcp::50486-:2376,hostname=stopped-upgrade-593000 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/disk.qcow2
	I0920 10:27:55.620686    4398 main.go:141] libmachine: STDOUT: 
	I0920 10:27:55.620718    4398 main.go:141] libmachine: STDERR: 
	I0920 10:27:55.620731    4398 main.go:141] libmachine: Waiting for VM to start (ssh -p 50485 docker@127.0.0.1)...
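	The restart above is a plain qemu-system-aarch64 invocation. A trimmed sketch of the same command with the long profile paths replaced by placeholders (flags exactly as logged):

	    qemu-system-aarch64 \
	      -M virt,highmem=off -cpu host -accel hvf \
	      -m 2200 -smp 2 -display none \
	      -drive file=edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	      -boot d -cdrom boot2docker.iso \
	      -qmp unix:monitor,server,nowait \
	      -pidfile qemu.pid \
	      -nic user,model=virtio,hostfwd=tcp::50485-:22,hostfwd=tcp::50486-:2376,hostname=stopped-upgrade-593000 \
	      -daemonize disk.qcow2
	    # -accel hvf uses Apple's Hypervisor.framework; the two hostfwd rules expose guest SSH (22)
	    # and the Docker TLS port (2376) on host ports 50485 and 50486, which the SSH provisioning
	    # steps later in this log connect to.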
	I0920 10:28:04.634516    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:04.634747    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:04.647132    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:04.647231    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:04.657900    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:04.657990    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:04.668511    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:04.668591    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:04.679635    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:04.679712    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:04.690066    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:04.690153    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:04.701187    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:04.701291    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:04.713798    4251 logs.go:276] 0 containers: []
	W0920 10:28:04.713809    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:04.713882    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:04.723986    4251 logs.go:276] 0 containers: []
	W0920 10:28:04.723997    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:04.724005    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:04.724010    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:04.759281    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:04.759292    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:04.774360    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:04.774373    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:04.787481    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:04.787491    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:04.799529    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:04.799540    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:04.813349    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:04.813358    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:04.829049    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:04.829059    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:04.841231    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:04.841242    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:04.865130    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:04.865138    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:04.869756    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:04.869766    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:04.881417    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:04.881428    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:04.899175    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:04.899188    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:04.912689    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:04.912700    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:04.951961    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:04.951972    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:04.966147    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:04.966161    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:07.478851    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:12.481047    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:12.481490    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:12.513886    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:12.514045    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:12.533590    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:12.533689    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:12.548066    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:12.548155    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:12.559841    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:12.559926    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:12.572630    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:12.572711    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:12.583149    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:12.583228    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:12.593301    4251 logs.go:276] 0 containers: []
	W0920 10:28:12.593312    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:12.593374    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:12.603489    4251 logs.go:276] 0 containers: []
	W0920 10:28:12.603504    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:12.603514    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:12.603520    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:12.608000    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:12.608013    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:12.624228    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:12.624240    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:12.636316    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:12.636328    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:12.647686    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:12.647697    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:12.684121    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:12.684130    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:12.698114    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:12.698124    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:12.713029    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:12.713038    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:12.739013    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:12.739023    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:12.764527    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:12.764538    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:12.776703    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:12.776717    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:12.789230    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:12.789241    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:12.805095    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:12.805105    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:12.838886    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:12.838903    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:12.850980    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:12.850996    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:15.387373    4398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/config.json ...
	I0920 10:28:15.388100    4398 machine.go:93] provisionDockerMachine start ...
	I0920 10:28:15.388250    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.388586    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.388601    4398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:28:15.480009    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:28:15.480042    4398 buildroot.go:166] provisioning hostname "stopped-upgrade-593000"
	I0920 10:28:15.480151    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.480372    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.480383    4398 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-593000 && echo "stopped-upgrade-593000" | sudo tee /etc/hostname
	I0920 10:28:15.568909    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-593000
	
	I0920 10:28:15.568999    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.569182    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.569198    4398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-593000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-593000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-593000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:28:15.651198    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:28:15.651213    4398 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-1143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-1143/.minikube}
	I0920 10:28:15.651230    4398 buildroot.go:174] setting up certificates
	I0920 10:28:15.651236    4398 provision.go:84] configureAuth start
	I0920 10:28:15.651241    4398 provision.go:143] copyHostCerts
	I0920 10:28:15.651339    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem, removing ...
	I0920 10:28:15.651347    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem
	I0920 10:28:15.651873    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem (1078 bytes)
	I0920 10:28:15.652100    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem, removing ...
	I0920 10:28:15.652104    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem
	I0920 10:28:15.652165    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem (1123 bytes)
	I0920 10:28:15.652306    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem, removing ...
	I0920 10:28:15.652310    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem
	I0920 10:28:15.652362    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem (1679 bytes)
	I0920 10:28:15.652464    4398 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-593000 san=[127.0.0.1 localhost minikube stopped-upgrade-593000]
	I0920 10:28:15.768179    4398 provision.go:177] copyRemoteCerts
	I0920 10:28:15.768223    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:28:15.768233    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:15.807043    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:28:15.814426    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:28:15.821675    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 10:28:15.828554    4398 provision.go:87] duration metric: took 177.317291ms to configureAuth
	I0920 10:28:15.828563    4398 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:28:15.828681    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:28:15.828725    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.828817    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.828822    4398 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:28:15.899822    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:28:15.899832    4398 buildroot.go:70] root file system type: tmpfs
	I0920 10:28:15.899884    4398 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:28:15.899934    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.900069    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.900102    4398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:28:15.977063    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:28:15.977135    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.977245    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.977255    4398 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:28:16.322021    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:28:16.322034    4398 machine.go:96] duration metric: took 933.949292ms to provisionDockerMachine
	I0920 10:28:16.322040    4398 start.go:293] postStartSetup for "stopped-upgrade-593000" (driver="qemu2")
	I0920 10:28:16.322047    4398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:28:16.322103    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:28:16.322114    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:16.361166    4398 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:28:16.362431    4398 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:28:16.362439    4398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/addons for local assets ...
	I0920 10:28:16.362522    4398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/files for local assets ...
	I0920 10:28:16.362865    4398 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0920 10:28:16.363000    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:28:16.365852    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:28:16.372420    4398 start.go:296] duration metric: took 50.375833ms for postStartSetup
	I0920 10:28:16.372434    4398 fix.go:56] duration metric: took 20.812254167s for fixHost
	I0920 10:28:16.372473    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:16.372577    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:16.372582    4398 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:28:16.446827    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853296.067055879
	
	I0920 10:28:16.446836    4398 fix.go:216] guest clock: 1726853296.067055879
	I0920 10:28:16.446841    4398 fix.go:229] Guest: 2024-09-20 10:28:16.067055879 -0700 PDT Remote: 2024-09-20 10:28:16.372436 -0700 PDT m=+20.934048501 (delta=-305.380121ms)
	I0920 10:28:16.446852    4398 fix.go:200] guest clock delta is within tolerance: -305.380121ms
	I0920 10:28:16.446855    4398 start.go:83] releasing machines lock for "stopped-upgrade-593000", held for 20.88668675s
	I0920 10:28:16.446933    4398 ssh_runner.go:195] Run: cat /version.json
	I0920 10:28:16.446946    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:16.446933    4398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:28:16.446995    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	W0920 10:28:16.447513    4398 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50485: connect: connection refused
	I0920 10:28:16.447534    4398 retry.go:31] will retry after 350.539855ms: dial tcp [::1]:50485: connect: connection refused
	W0920 10:28:16.483662    4398 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:28:16.483712    4398 ssh_runner.go:195] Run: systemctl --version
	I0920 10:28:16.485799    4398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:28:16.487364    4398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:28:16.487395    4398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:28:16.490414    4398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:28:16.494872    4398 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:28:16.494880    4398 start.go:495] detecting cgroup driver to use...
	I0920 10:28:16.494961    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:28:16.502392    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:28:16.505536    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:28:16.508434    4398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:28:16.508461    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:28:16.511571    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:28:16.514983    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:28:16.518518    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:28:16.521457    4398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:28:16.524207    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:28:16.527491    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:28:16.536119    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:28:16.540917    4398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:28:16.543970    4398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:28:16.546924    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:16.623484    4398 ssh_runner.go:195] Run: sudo systemctl restart containerd
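	The preceding sed edits rewrite /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver before the daemon is restarted. A quick way to confirm the rewrite landed (a verification idea, not part of the minikube flow):

	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expected: SystemdCgroup = false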
	I0920 10:28:16.634072    4398 start.go:495] detecting cgroup driver to use...
	I0920 10:28:16.634149    4398 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:28:16.640947    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:28:16.645838    4398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:28:16.654308    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:28:16.659097    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:28:16.663738    4398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:28:16.703527    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:28:16.708546    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:28:16.713985    4398 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:28:16.715217    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:28:16.717804    4398 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:28:16.722613    4398 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:28:16.786761    4398 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:28:16.866407    4398 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:28:16.866476    4398 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
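	The 130-byte /etc/docker/daemon.json written here is not reproduced in the log; a typical cgroupfs configuration for Docker looks like the following (an assumption about the contents, not the exact file minikube copied over):

	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF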
	I0920 10:28:16.871698    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:16.932909    4398 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:28:18.083256    4398 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15035225s)
	I0920 10:28:18.083336    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:28:18.087800    4398 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:28:18.093754    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:28:18.098470    4398 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:28:18.162392    4398 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:28:18.226550    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:18.290557    4398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:28:18.296692    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:28:18.301440    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:18.367070    4398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:28:18.407286    4398 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:28:18.407385    4398 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:28:18.410459    4398 start.go:563] Will wait 60s for crictl version
	I0920 10:28:18.410525    4398 ssh_runner.go:195] Run: which crictl
	I0920 10:28:18.411939    4398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:28:18.426258    4398 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:28:18.426334    4398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:28:18.444505    4398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:28:15.375199    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:18.461638    4398 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:28:18.461778    4398 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:28:18.463096    4398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:28:18.467204    4398 kubeadm.go:883] updating cluster {Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:28:18.467248    4398 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:28:18.467293    4398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:28:18.478842    4398 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:28:18.478851    4398 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:28:18.478912    4398 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:28:18.482263    4398 ssh_runner.go:195] Run: which lz4
	I0920 10:28:18.483552    4398 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:28:18.484888    4398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:28:18.484898    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:28:19.374919    4398 docker.go:649] duration metric: took 891.433791ms to copy over tarball
	I0920 10:28:19.374984    4398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
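	The extraction flags above are what keep the preloaded binaries usable: --xattrs with --xattrs-include security.capability preserves extended attributes (and with them file capabilities) on the unpacked files, -I lz4 streams the archive through lz4, and -C /var unpacks it under /var where Docker's image store lives. The step, restated on its own (paths as logged):

	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4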
	I0920 10:28:20.377583    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:20.377694    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:20.389755    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:20.389854    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:20.402030    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:20.402116    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:20.414033    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:20.414121    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:20.426816    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:20.426915    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:20.439101    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:20.439189    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:20.451591    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:20.451681    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:20.463478    4251 logs.go:276] 0 containers: []
	W0920 10:28:20.463491    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:20.463573    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:20.474906    4251 logs.go:276] 0 containers: []
	W0920 10:28:20.474919    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:20.474927    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:20.474933    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:20.493099    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:20.493111    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:20.510054    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:20.510069    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:20.522838    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:20.522856    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:20.527742    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:20.527753    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:20.543781    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:20.543791    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:20.582758    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:20.582772    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:20.594864    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:20.594881    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:20.619670    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:20.619680    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:20.634027    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:20.634036    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:20.653787    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:20.653801    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:20.665754    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:20.665769    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:20.685411    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:20.685426    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:20.696783    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:20.696794    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:20.733224    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:20.733233    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:23.249417    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
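The api_server.go lines above show the health-check pattern this run keeps repeating: a GET against https://10.0.2.15:8443/healthz with a short client timeout, logged as "stopped" whenever the request deadlines. Below is a minimal, self-contained sketch of that kind of poll loop in Go; the endpoint, timeout, retry interval, and overall deadline are illustrative assumptions, not minikube's actual values or code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed endpoint from the log; the apiserver's serving cert is not in the
	// host trust store, so verification is skipped for this health probe only.
	url := "https://10.0.2.15:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Println("healthz status:", resp.Status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for apiserver")
}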
	I0920 10:28:20.543103    4398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168134333s)
	I0920 10:28:20.543117    4398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:28:20.560063    4398 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:28:20.563630    4398 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:28:20.569342    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:20.633082    4398 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:28:22.070107    4398 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.437046958s)
	I0920 10:28:22.070220    4398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:28:22.080974    4398 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:28:22.080984    4398 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:28:22.080990    4398 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:28:22.086691    4398 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:22.088598    4398 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.089968    4398 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.089981    4398 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:22.091402    4398 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.091459    4398 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.092901    4398 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.092917    4398 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:28:22.094167    4398 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.094177    4398 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.095575    4398 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.095591    4398 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:28:22.096646    4398 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.096694    4398 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.097629    4398 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.098322    4398 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.509241    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.526844    4398 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:28:22.526871    4398 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.526934    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.537160    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:28:22.544348    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.544448    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:28:22.549091    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.562510    4398 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:28:22.562538    4398 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.562612    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.565047    4398 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:28:22.565065    4398 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:28:22.565110    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:28:22.567113    4398 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:28:22.567126    4398 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.567173    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0920 10:28:22.578070    4398 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:28:22.578231    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.584949    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:28:22.585002    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:28:22.585135    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:28:22.586165    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.589313    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:28:22.589418    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:28:22.597820    4398 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:28:22.597836    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:28:22.597842    4398 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.597864    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:28:22.597906    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.602387    4398 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:28:22.602397    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:28:22.602415    4398 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.602428    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:28:22.602482    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.616925    4398 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:28:22.616939    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:28:22.617116    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.619844    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:28:22.619975    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:28:22.630226    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:28:22.666013    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:28:22.666062    4398 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:28:22.666066    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:28:22.666083    4398 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.666090    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:28:22.666140    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.695270    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:28:22.780404    4398 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:28:22.780456    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:28:22.909249    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:28:22.936111    4398 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:28:22.936128    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:28:23.073192    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0920 10:28:23.079184    4398 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:28:23.079302    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.089645    4398 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:28:23.089670    4398 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.089739    4398 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.102932    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:28:23.103070    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:28:23.104510    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:28:23.104522    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:28:23.132695    4398 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:28:23.132708    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:28:23.360569    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
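The cache_images/docker.go lines above copy each image tarball into /var/lib/minikube/images and then run "sudo cat <tar> | docker load" on the node. A minimal local sketch of that load step, assuming a reachable Docker daemon and an existing tarball path; the real flow executes this inside the guest over SSH, which is not reproduced here.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams an image tarball into `docker load`,
// mirroring the "cat <tar> | docker load" step seen in the log.
func loadImage(tarPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load failed: %v: %s", err, out)
	}
	fmt.Printf("loaded %s: %s", tarPath, out)
	return nil
}

func main() {
	// Assumed path; in the log the tarballs live under /var/lib/minikube/images.
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}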
	I0920 10:28:23.360607    4398 cache_images.go:92] duration metric: took 1.279645458s to LoadCachedImages
	W0920 10:28:23.360644    4398 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0920 10:28:23.360654    4398 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:28:23.360696    4398 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-593000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:28:23.360774    4398 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:28:23.374428    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:28:23.374441    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:28:23.374447    4398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:28:23.374456    4398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-593000 NodeName:stopped-upgrade-593000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:28:23.374513    4398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-593000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:28:23.374582    4398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:28:23.378173    4398 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:28:23.378216    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:28:23.380860    4398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:28:23.385631    4398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:28:23.390464    4398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:28:23.395919    4398 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:28:23.397203    4398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:28:23.400387    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:23.467194    4398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:28:23.472954    4398 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000 for IP: 10.0.2.15
	I0920 10:28:23.472969    4398 certs.go:194] generating shared ca certs ...
	I0920 10:28:23.472978    4398 certs.go:226] acquiring lock for ca certs: {Name:mk7151e0388cf18b174fabc4929e6178a41b4c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.473141    4398 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key
	I0920 10:28:23.473190    4398 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key
	I0920 10:28:23.473196    4398 certs.go:256] generating profile certs ...
	I0920 10:28:23.473254    4398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key
	I0920 10:28:23.473273    4398 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731
	I0920 10:28:23.473284    4398 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:28:23.523351    4398 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 ...
	I0920 10:28:23.523367    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731: {Name:mk33e1c515dcd1dcd2322b493212597c9529e282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.524004    4398 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731 ...
	I0920 10:28:23.524014    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731: {Name:mkaa29a25453276623c6265144807dca9cb38e64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.524164    4398 certs.go:381] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt
	I0920 10:28:23.524306    4398 certs.go:385] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key
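The certs.go lines above generate a signed apiserver certificate whose subject alternative names include the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 10.0.2.15. A minimal sketch of issuing such a cert with Go's crypto/x509, assuming a throwaway in-memory CA; the key size, validity period, and subject names are illustrative, error handling is elided for brevity, and this is not minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and certificate (illustrative only; errors ignored).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate carrying the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}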
	I0920 10:28:23.524470    4398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.key
	I0920 10:28:23.524610    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem (1338 bytes)
	W0920 10:28:23.524642    4398 certs.go:480] ignoring /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0920 10:28:23.524646    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 10:28:23.524668    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:28:23.524692    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:28:23.524712    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem (1679 bytes)
	I0920 10:28:23.524749    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:28:23.525068    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:28:23.531795    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:28:23.539324    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:28:23.546829    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:28:23.553988    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:28:23.560909    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 10:28:23.567687    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:28:23.575185    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:28:23.582702    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:28:23.589832    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0920 10:28:23.596634    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0920 10:28:23.603298    4398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:28:23.609449    4398 ssh_runner.go:195] Run: openssl version
	I0920 10:28:23.611268    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0920 10:28:23.614818    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.616170    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:59 /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.616194    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.618044    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:28:23.620942    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:28:23.623891    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.625427    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.625448    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.627108    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:28:23.630557    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0920 10:28:23.633522    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.634808    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:59 /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.634827    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.636607    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0920 10:28:23.639705    4398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:28:23.641173    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:28:23.642954    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:28:23.644810    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:28:23.646712    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:28:23.648648    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:28:23.650408    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
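Each "openssl x509 -noout -in <cert> -checkend 86400" above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how the restart path decides whether a cert needs regenerating. An equivalent check written in Go, using one of the cert paths from the log as an assumed input:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed path; the log checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// Same semantics as `openssl x509 -checkend 86400`.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}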
	I0920 10:28:23.652242    4398 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:28:23.652308    4398 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:28:23.663074    4398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:28:23.666885    4398 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:28:23.666896    4398 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:28:23.666924    4398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:28:23.670060    4398 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:28:23.670359    4398 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-593000" does not appear in /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:28:23.670453    4398 kubeconfig.go:62] /Users/jenkins/minikube-integration/19672-1143/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-593000" cluster setting kubeconfig missing "stopped-upgrade-593000" context setting]
	I0920 10:28:23.670648    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.671361    4398 kapi.go:59] client config for stopped-upgrade-593000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102212030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:28:23.671692    4398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:28:23.675007    4398 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-593000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
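The diff above is how config drift is detected: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new is compared against the existing /var/tmp/minikube/kubeadm.yaml, and any difference triggers a reconfigure of the control plane. A minimal sketch of that decision, assuming a simple byte-for-byte comparison rather than minikube's actual `diff -u` invocation over SSH:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Paths taken from the log; the comparison strategy here is an assumption.
	oldPath := "/var/tmp/minikube/kubeadm.yaml"
	newPath := "/var/tmp/minikube/kubeadm.yaml.new"

	oldCfg, errOld := os.ReadFile(oldPath)
	newCfg, errNew := os.ReadFile(newPath)
	if errOld != nil || errNew != nil {
		fmt.Fprintln(os.Stderr, "missing config:", errOld, errNew)
		os.Exit(2)
	}
	if bytes.Equal(oldCfg, newCfg) {
		fmt.Println("kubeadm config unchanged, no reconfigure needed")
		return
	}
	fmt.Println("kubeadm config drift detected, reconfiguring from", newPath)
}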
	I0920 10:28:23.675012    4398 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:28:23.675060    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:28:23.685586    4398 docker.go:483] Stopping containers: [9b6d0dc7f9bd 39a78cefa13d 53b2e9135faf 03d7ed98fdba a0db8e235df0 9887b6c9112a 89f47a36713c 425892479a5b]
	I0920 10:28:23.685661    4398 ssh_runner.go:195] Run: docker stop 9b6d0dc7f9bd 39a78cefa13d 53b2e9135faf 03d7ed98fdba a0db8e235df0 9887b6c9112a 89f47a36713c 425892479a5b
	I0920 10:28:23.695906    4398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:28:23.701461    4398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:28:23.704171    4398 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:28:23.704180    4398 kubeadm.go:157] found existing configuration files:
	
	I0920 10:28:23.704212    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf
	I0920 10:28:23.706978    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:28:23.707012    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:28:23.710134    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf
	I0920 10:28:23.712528    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:28:23.712557    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:28:23.715275    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf
	I0920 10:28:23.718280    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:28:23.718303    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:28:23.721020    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf
	I0920 10:28:23.723577    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:28:23.723602    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
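The block above greps each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:50520 and removes any file that does not reference it. A small sketch of that check-and-remove loop; the helper name is hypothetical, while the endpoint and file paths are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm loop in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", p, endpoint)
			os.Remove(p) // error ignored, matching the `rm -f` in the log
			continue
		}
		fmt.Printf("%s is up to date\n", p)
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:50520", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}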
	I0920 10:28:23.726564    4398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:28:23.729326    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:23.754367    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.152587    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.269776    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.291878    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.317235    4398 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:28:24.317325    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:24.819348    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:25.318958    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:25.322867    4398 api_server.go:72] duration metric: took 1.005661917s to wait for apiserver process to appear ...
	I0920 10:28:25.322877    4398 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:28:25.322885    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:28.251474    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:28.251614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:28.266288    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:28.266375    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:28.276997    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:28.277077    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:28.288632    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:28.288718    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:28.299905    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:28.299997    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:28.311150    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:28.311237    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:28.322115    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:28.322198    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:28.333202    4251 logs.go:276] 0 containers: []
	W0920 10:28:28.333214    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:28.333290    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:28.343729    4251 logs.go:276] 0 containers: []
	W0920 10:28:28.343742    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:28.343751    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:28.343757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:28.359265    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:28.359281    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:28.375788    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:28.375803    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:28.396756    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:28.396772    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:28.437742    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:28.437758    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:28.462467    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:28.462477    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:28.474619    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:28.474631    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:28.479067    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:28.479072    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:28.491312    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:28.491328    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:28.505555    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:28.505571    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:28.519760    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:28.519772    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:28.537541    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:28.537555    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:28.572475    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:28.572487    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:28.591106    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:28.591123    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:28.602778    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:28.602793    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:30.324866    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:30.324934    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:31.117131    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:35.325562    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:35.325668    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:36.119854    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:36.120436    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:36.158977    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:36.159144    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:36.181364    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:36.181483    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:36.197437    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:36.197528    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:36.209998    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:36.210080    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:36.221087    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:36.221159    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:36.231633    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:36.231713    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:36.242409    4251 logs.go:276] 0 containers: []
	W0920 10:28:36.242420    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:36.242487    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:36.253526    4251 logs.go:276] 0 containers: []
	W0920 10:28:36.253538    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:36.253546    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:36.253552    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:36.267271    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:36.267282    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:36.282509    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:36.282519    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:36.306140    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:36.306149    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:36.310723    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:36.310732    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:36.324668    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:36.324682    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:36.339261    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:36.339271    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:36.354563    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:36.354579    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:36.366948    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:36.366959    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:36.379149    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:36.379164    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:36.397257    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:36.397267    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:36.409059    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:36.409072    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:36.448199    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:36.448212    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:36.461234    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:36.461245    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:36.473133    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:36.473146    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:39.008579    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:40.326570    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:40.326676    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:44.008944    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:44.009070    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:44.021375    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:44.021463    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:44.033390    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:44.033474    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:44.045648    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:44.045742    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:44.056555    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:44.056654    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:44.071760    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:44.071850    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:44.084086    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:44.084177    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:44.095646    4251 logs.go:276] 0 containers: []
	W0920 10:28:44.095658    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:44.095738    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:44.106414    4251 logs.go:276] 0 containers: []
	W0920 10:28:44.106428    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:44.106436    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:44.106443    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:44.144777    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:44.144794    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:44.157933    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:44.157943    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:44.171553    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:44.171566    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:44.186387    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:44.186402    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:44.198085    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:44.198097    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:44.217031    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:44.217044    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:44.241148    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:44.241156    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:44.252276    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:44.252291    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:44.263973    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:44.263990    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:44.268962    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:44.268968    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:44.304630    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:44.304647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:44.320295    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:44.320304    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:44.335428    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:44.335437    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:44.358135    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:44.358151    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
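The blocks of "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" followed by "docker logs --tail 400 <id>" above are minikube enumerating the control-plane containers and dumping their recent logs for diagnostics while the apiserver stays unreachable. A rough local sketch of the same pattern is shown below; it runs the docker CLI directly, whereas minikube runs the equivalent commands over SSH inside the guest, and the component list and tail length are taken from the log.

    // gatherlogs: sketch of the "list k8s_<component> containers, then dump the
    // last 400 log lines of each" loop seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
            }
        }
    }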
	I0920 10:28:45.328039    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:45.328085    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:46.871456    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:50.329399    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:50.329501    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:51.873756    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:51.874016    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:28:51.892822    4251 logs.go:276] 2 containers: [46b2cdfb23f1 ff40eb4b128a]
	I0920 10:28:51.892930    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:28:51.906450    4251 logs.go:276] 2 containers: [7dfaaa22cf45 1fdd341b2f16]
	I0920 10:28:51.906535    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:28:51.918304    4251 logs.go:276] 1 containers: [54cba1efc0d2]
	I0920 10:28:51.918386    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:28:51.928823    4251 logs.go:276] 2 containers: [5ee6ccd67525 0a0db24a147d]
	I0920 10:28:51.928903    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:28:51.939261    4251 logs.go:276] 1 containers: [2961ff83031e]
	I0920 10:28:51.939350    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:28:51.949807    4251 logs.go:276] 2 containers: [c56fb1f2431a 0f066600e355]
	I0920 10:28:51.949891    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:28:51.960139    4251 logs.go:276] 0 containers: []
	W0920 10:28:51.960152    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:28:51.960222    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:28:51.970707    4251 logs.go:276] 0 containers: []
	W0920 10:28:51.970718    4251 logs.go:278] No container was found matching "storage-provisioner"
	I0920 10:28:51.970725    4251 logs.go:123] Gathering logs for kube-scheduler [0a0db24a147d] ...
	I0920 10:28:51.970731    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0db24a147d"
	I0920 10:28:51.986572    4251 logs.go:123] Gathering logs for kube-controller-manager [0f066600e355] ...
	I0920 10:28:51.986588    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f066600e355"
	I0920 10:28:51.998391    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:28:51.998404    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:28:52.015612    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:28:52.015623    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:28:52.054181    4251 logs.go:123] Gathering logs for kube-apiserver [46b2cdfb23f1] ...
	I0920 10:28:52.054190    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46b2cdfb23f1"
	I0920 10:28:52.068460    4251 logs.go:123] Gathering logs for etcd [7dfaaa22cf45] ...
	I0920 10:28:52.068470    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dfaaa22cf45"
	I0920 10:28:52.081820    4251 logs.go:123] Gathering logs for etcd [1fdd341b2f16] ...
	I0920 10:28:52.081830    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fdd341b2f16"
	I0920 10:28:52.095920    4251 logs.go:123] Gathering logs for coredns [54cba1efc0d2] ...
	I0920 10:28:52.095929    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54cba1efc0d2"
	I0920 10:28:52.107603    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:28:52.107614    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:28:52.112512    4251 logs.go:123] Gathering logs for kube-apiserver [ff40eb4b128a] ...
	I0920 10:28:52.112521    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff40eb4b128a"
	I0920 10:28:52.125012    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:28:52.125022    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:28:52.148974    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:28:52.148985    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:28:52.183719    4251 logs.go:123] Gathering logs for kube-controller-manager [c56fb1f2431a] ...
	I0920 10:28:52.183730    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c56fb1f2431a"
	I0920 10:28:52.205079    4251 logs.go:123] Gathering logs for kube-scheduler [5ee6ccd67525] ...
	I0920 10:28:52.205089    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee6ccd67525"
	I0920 10:28:52.219659    4251 logs.go:123] Gathering logs for kube-proxy [2961ff83031e] ...
	I0920 10:28:52.219670    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2961ff83031e"
	I0920 10:28:54.732610    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:55.331459    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:55.331503    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:59.733532    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:59.733578    4251 kubeadm.go:597] duration metric: took 4m3.543127083s to restartPrimaryControlPlane
	W0920 10:28:59.733616    4251 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:28:59.733633    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:29:00.333604    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:00.333628    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:00.631124    4251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:29:00.636097    4251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:29:00.639061    4251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:29:00.641581    4251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:29:00.641587    4251 kubeadm.go:157] found existing configuration files:
	
	I0920 10:29:00.641612    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0920 10:29:00.644607    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:29:00.644633    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:29:00.647664    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0920 10:29:00.650240    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:29:00.650265    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:29:00.652930    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0920 10:29:00.656047    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:29:00.656073    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:29:00.658924    4251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0920 10:29:00.661372    4251 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:29:00.661397    4251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
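The sequence just above (ls on the four /etc/kubernetes/*.conf files, then a grep for the expected control-plane endpoint, then rm -f when the check fails) is the stale-kubeconfig cleanup that precedes kubeadm init. Sketched as standalone Go below, purely as an illustration of the pattern: the paths and the endpoint string are the ones from the log, while the real checks run over SSH in the guest.

    // staleconf: sketch of the cleanup step above. Any kubeconfig under
    // /etc/kubernetes that does not mention the expected control-plane endpoint
    // (or cannot be read at all) is removed so kubeadm init can regenerate it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50276" // from the log
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, path := range confs {
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                os.Remove(path) // ignore the error, as rm -f does
            }
        }
    }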
	I0920 10:29:00.664465    4251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:29:00.683388    4251 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:29:00.683426    4251 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:29:00.732445    4251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:29:00.732544    4251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:29:00.732635    4251 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:29:00.786315    4251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:29:00.789449    4251 out.go:235]   - Generating certificates and keys ...
	I0920 10:29:00.789490    4251 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:29:00.789532    4251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:29:00.789595    4251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:29:00.789627    4251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:29:00.789666    4251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:29:00.789701    4251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:29:00.789732    4251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:29:00.789762    4251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:29:00.789807    4251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:29:00.789839    4251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:29:00.789861    4251 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:29:00.789890    4251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:29:00.904030    4251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:29:01.030656    4251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:29:01.118868    4251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:29:01.255636    4251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:29:01.284735    4251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:29:01.285127    4251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:29:01.285190    4251 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:29:01.371641    4251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:29:01.376097    4251 out.go:235]   - Booting up control plane ...
	I0920 10:29:01.376151    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:29:01.376197    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:29:01.376237    4251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:29:01.376285    4251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:29:01.376371    4251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:29:05.335715    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:05.335773    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:05.880673    4251 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506782 seconds
	I0920 10:29:05.880740    4251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:29:05.885362    4251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:29:06.403913    4251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:29:06.404185    4251 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-444000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:29:06.908578    4251 kubeadm.go:310] [bootstrap-token] Using token: a87zcg.33g53o7cj2747u9s
	I0920 10:29:06.914760    4251 out.go:235]   - Configuring RBAC rules ...
	I0920 10:29:06.914835    4251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:29:06.914900    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:29:06.920322    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:29:06.921257    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:29:06.922122    4251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:29:06.923056    4251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:29:06.928881    4251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:29:07.090396    4251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:29:07.313109    4251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:29:07.313505    4251 kubeadm.go:310] 
	I0920 10:29:07.313537    4251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:29:07.313543    4251 kubeadm.go:310] 
	I0920 10:29:07.313666    4251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:29:07.313674    4251 kubeadm.go:310] 
	I0920 10:29:07.313686    4251 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:29:07.313728    4251 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:29:07.313791    4251 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:29:07.313800    4251 kubeadm.go:310] 
	I0920 10:29:07.313841    4251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:29:07.313853    4251 kubeadm.go:310] 
	I0920 10:29:07.313882    4251 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:29:07.313887    4251 kubeadm.go:310] 
	I0920 10:29:07.313920    4251 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:29:07.313958    4251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:29:07.314012    4251 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:29:07.314016    4251 kubeadm.go:310] 
	I0920 10:29:07.314100    4251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:29:07.314151    4251 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:29:07.314157    4251 kubeadm.go:310] 
	I0920 10:29:07.314203    4251 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a87zcg.33g53o7cj2747u9s \
	I0920 10:29:07.314280    4251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a \
	I0920 10:29:07.314302    4251 kubeadm.go:310] 	--control-plane 
	I0920 10:29:07.314311    4251 kubeadm.go:310] 
	I0920 10:29:07.314367    4251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:29:07.314375    4251 kubeadm.go:310] 
	I0920 10:29:07.314414    4251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a87zcg.33g53o7cj2747u9s \
	I0920 10:29:07.314468    4251 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a 
	I0920 10:29:07.314540    4251 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
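The --discovery-token-ca-cert-hash value in the join commands above is kubeadm's public-key pin: a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. The sketch below recomputes that sha256:<hex> form from a CA PEM; the file path is an assumption (on a kubeadm control-plane node it is typically /etc/kubernetes/pki/ca.crt).

    // cacerthash: recompute the sha256:<hex> value used with
    // --discovery-token-ca-cert-hash from a CA certificate PEM.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }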
	I0920 10:29:07.314550    4251 cni.go:84] Creating CNI manager for ""
	I0920 10:29:07.314558    4251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:29:07.320212    4251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:29:07.330152    4251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:29:07.333056    4251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
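The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube writes when it recommends the bridge plugin for the docker runtime. The log does not reproduce its contents; as an illustration only, a generic bridge-plus-portmap conflist of the kind documented by the CNI project can be written out as in the sketch below. The JSON shown is not claimed to be minikube's exact file, and the subnet is a placeholder.

    // writecni: sketch of writing a generic bridge CNI conflist (requires root).
    // This is a textbook bridge+portmap configuration, NOT the exact file
    // minikube generates; the subnet below is a placeholder.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }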
	I0920 10:29:07.337746    4251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:29:07.337793    4251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:29:07.337844    4251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-444000 minikube.k8s.io/updated_at=2024_09_20T10_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=running-upgrade-444000 minikube.k8s.io/primary=true
	I0920 10:29:07.340821    4251 ops.go:34] apiserver oom_adj: -16
	I0920 10:29:07.383552    4251 kubeadm.go:1113] duration metric: took 45.799709ms to wait for elevateKubeSystemPrivileges
	I0920 10:29:07.383664    4251 kubeadm.go:394] duration metric: took 4m11.20667775s to StartCluster
	I0920 10:29:07.383677    4251 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:29:07.383774    4251 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:29:07.384184    4251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:29:07.384389    4251 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:29:07.384398    4251 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:29:07.384432    4251 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-444000"
	I0920 10:29:07.384440    4251 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-444000"
	I0920 10:29:07.384441    4251 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-444000"
	I0920 10:29:07.384450    4251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-444000"
	W0920 10:29:07.384458    4251 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:29:07.384471    4251 config.go:182] Loaded profile config "running-upgrade-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:29:07.384472    4251 host.go:66] Checking if "running-upgrade-444000" exists ...
	I0920 10:29:07.385373    4251 kapi.go:59] client config for running-upgrade-444000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/running-upgrade-444000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028ea030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:29:07.385499    4251 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-444000"
	W0920 10:29:07.385504    4251 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:29:07.385511    4251 host.go:66] Checking if "running-upgrade-444000" exists ...
	I0920 10:29:07.387271    4251 out.go:177] * Verifying Kubernetes components...
	I0920 10:29:07.387594    4251 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:29:07.391329    4251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:29:07.391335    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:29:07.395144    4251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:29:07.399150    4251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:29:07.403227    4251 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:29:07.403234    4251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:29:07.403241    4251 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/running-upgrade-444000/id_rsa Username:docker}
	I0920 10:29:07.494401    4251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:29:07.499278    4251 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:29:07.499318    4251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:29:07.503201    4251 api_server.go:72] duration metric: took 118.804333ms to wait for apiserver process to appear ...
	I0920 10:29:07.503210    4251 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:29:07.503217    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:07.553133    4251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:29:07.580972    4251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:29:07.888798    4251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:29:07.888810    4251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:29:10.338052    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:10.338093    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:12.505211    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:12.505271    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:15.338748    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:15.338799    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:17.505600    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:17.505634    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:20.340986    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:20.341007    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:22.506097    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:22.506141    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:25.343065    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:25.343375    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:25.368738    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:25.368869    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:25.385841    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:25.385947    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:25.403803    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:25.403885    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:25.414654    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:25.414741    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:25.425368    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:25.425450    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:25.435945    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:25.436024    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:25.446196    4398 logs.go:276] 0 containers: []
	W0920 10:29:25.446213    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:25.446294    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:25.457049    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:25.457068    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:25.457074    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:25.461345    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:25.461365    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:27.506731    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:27.506789    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:25.476016    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:25.476026    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:25.515540    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:25.515551    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:25.534356    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:25.534369    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:25.545684    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:25.545695    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:25.557383    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:25.557398    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:25.575606    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:25.575616    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:25.586933    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:25.586948    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:25.598414    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:25.598426    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:25.673908    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:25.673919    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:25.688039    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:25.688050    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:25.701999    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:25.702011    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:25.714343    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:25.714355    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:25.755743    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:25.755755    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:25.766636    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:25.766647    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:25.782711    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:25.782721    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:28.310454    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:32.507950    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:32.508009    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:33.311339    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:33.311466    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:33.322506    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:33.322597    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:33.333260    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:33.333338    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:33.344161    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:33.344245    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:33.354699    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:33.354796    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:33.365649    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:33.365750    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:33.376860    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:33.376946    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:33.386895    4398 logs.go:276] 0 containers: []
	W0920 10:29:33.386907    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:33.386971    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:33.398180    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:33.398201    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:33.398208    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:33.436828    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:33.436843    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:33.441629    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:33.441637    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:33.478010    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:33.478021    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:33.504627    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:33.504642    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:33.516771    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:33.516784    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:33.555406    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:33.555417    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:33.570047    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:33.570057    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:33.585595    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:33.585606    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:33.597599    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:33.597610    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:33.615256    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:33.615271    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:33.629172    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:33.629182    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:33.640694    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:33.640706    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:33.652273    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:33.652284    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:33.665740    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:33.665751    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:33.683229    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:33.683244    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:33.694532    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:33.694543    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:37.509123    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:37.509169    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:29:37.890375    4251 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:29:37.896032    4251 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:29:37.903889    4251 addons.go:510] duration metric: took 30.520335917s for enable addons: enabled=[storage-provisioner]
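The "Enabling 'default-storageclass' returned an error" warning above means the addon callback tried to list storage.k8s.io/v1 StorageClasses against https://10.0.2.15:8443 and the TCP dial timed out, since the apiserver at that address never became reachable within the wait window; only storage-provisioner was recorded as enabled. A rough client-go sketch of the failing call is below; the kubeconfig path and timeout are assumptions, not values asserted by the log.

    // listsc: sketch of the API call behind the default-storageclass addon error.
    // If the apiserver is unreachable, List fails with a dial/timeout error like
    // the one in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for the sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            // e.g. Error listing StorageClasses: ... dial tcp 10.0.2.15:8443: i/o timeout
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }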
	I0920 10:29:36.206190    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:42.509903    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:42.509965    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:41.208453    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:41.208638    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:41.221131    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:41.221225    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:41.232343    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:41.232429    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:41.251776    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:41.251885    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:41.262272    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:41.262347    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:41.272564    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:41.272645    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:41.282953    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:41.283035    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:41.295261    4398 logs.go:276] 0 containers: []
	W0920 10:29:41.295271    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:41.295332    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:41.306294    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:41.306311    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:41.306316    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:41.343270    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:41.343279    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:41.357618    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:41.357629    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:41.396162    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:41.396173    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:41.410694    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:41.410710    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:41.430273    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:41.430287    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:41.441565    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:41.441578    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:41.456734    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:41.456745    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:41.468813    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:41.468824    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:41.473333    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:41.473340    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:41.509117    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:41.509129    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:41.520756    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:41.520769    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:41.533169    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:41.533180    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:41.545183    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:41.545195    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:41.558974    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:41.558985    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:41.576798    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:41.576808    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:41.589371    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:41.589382    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:44.116195    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:47.511205    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:47.511249    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:49.116383    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:49.116537    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:49.127742    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:49.127826    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:49.137907    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:49.137998    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:49.148318    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:49.148407    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:49.159385    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:49.159472    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:49.169810    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:49.169886    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:49.180328    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:49.180412    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:49.190157    4398 logs.go:276] 0 containers: []
	W0920 10:29:49.190169    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:49.190231    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:49.200556    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:49.200576    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:49.200581    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:49.214339    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:49.214354    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:49.234215    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:49.234225    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:49.245214    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:49.245228    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:49.260909    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:49.260921    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:49.274744    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:49.274755    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:49.309194    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:49.309210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:49.323793    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:49.323803    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:49.335003    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:49.335012    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:49.346099    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:49.346110    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:49.358415    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:49.358427    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:49.397506    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:49.397523    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:49.403156    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:49.403172    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:49.415257    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:49.415269    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:49.441084    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:49.441093    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:49.483185    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:49.483200    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:49.494930    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:49.494941    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:52.513263    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:52.513306    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:52.016924    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:57.515433    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:57.515454    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:57.019126    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:57.019497    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:57.051488    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:57.051629    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:57.069040    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:57.069147    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:57.083101    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:57.083196    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:57.094708    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:57.094787    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:57.105386    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:57.105468    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:57.116576    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:57.116655    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:57.127112    4398 logs.go:276] 0 containers: []
	W0920 10:29:57.127124    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:57.127193    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:57.137780    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:57.137800    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:57.137805    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:57.175108    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:57.175122    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:57.179335    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:57.179344    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:57.220898    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:57.220912    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:57.241105    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:57.241132    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:57.257221    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:57.257237    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:57.270893    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:57.270908    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:57.284403    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:57.284417    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:57.299629    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:57.299640    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:57.311101    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:57.311113    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:57.322459    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:57.322475    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:57.334232    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:57.334245    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:57.359433    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:57.359441    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:57.393638    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:57.393651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:57.405731    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:57.405744    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:57.422831    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:57.422841    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:57.437840    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:57.437852    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:59.952670    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:02.517535    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:02.517597    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:04.954813    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:04.955094    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:04.979265    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:04.979407    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:04.996375    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:04.996475    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:05.009439    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:05.009523    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:05.022380    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:05.022459    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:05.032416    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:05.032493    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:05.042725    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:05.042801    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:05.059916    4398 logs.go:276] 0 containers: []
	W0920 10:30:05.059928    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:05.059994    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:05.070147    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:05.070163    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:05.070170    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:05.109313    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:05.109324    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:05.144096    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:05.144108    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:05.158338    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:05.158349    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:05.175731    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:05.175742    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:05.187408    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:05.187422    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:05.191862    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:05.191869    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:05.205378    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:05.205388    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:05.243223    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:05.243233    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:05.257655    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:05.257665    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:05.269066    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:05.269079    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:05.280626    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:05.280641    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:05.296217    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:05.296227    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:05.307644    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:05.307656    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:05.319403    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:05.319417    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:05.343266    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:05.343275    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:05.357546    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:05.357558    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:07.519795    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:07.519989    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:07.546420    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:07.546513    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:07.558006    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:07.558093    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:07.568523    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:07.568607    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:07.579403    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:07.579484    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:07.589668    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:07.589766    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:07.600070    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:07.600151    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:07.622330    4251 logs.go:276] 0 containers: []
	W0920 10:30:07.622340    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:07.622408    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:07.632849    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:07.632864    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:07.632869    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:07.647400    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:07.647411    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:07.664980    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:07.664990    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:07.676093    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:07.676108    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:07.687428    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:07.687437    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:07.699036    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:07.699051    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:07.711165    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:07.711174    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:07.745238    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:07.745334    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:07.746480    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:07.746484    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:07.750651    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:07.750657    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:07.783631    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:07.783647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:07.798318    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:07.798331    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:07.812974    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:07.812985    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:07.836052    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:07.836059    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:07.848683    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:07.848706    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:07.848731    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:30:07.848735    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:07.848739    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:07.848743    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:07.848746    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:07.871511    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:12.872119    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:12.872440    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:12.900492    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:12.900632    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:12.915263    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:12.915354    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:12.927040    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:12.927131    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:12.938056    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:12.938145    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:12.948392    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:12.948472    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:12.959051    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:12.959124    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:12.969821    4398 logs.go:276] 0 containers: []
	W0920 10:30:12.969832    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:12.969898    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:12.980494    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:12.980514    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:12.980520    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:12.985219    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:12.985228    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:13.000336    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:13.000346    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:13.018783    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:13.018793    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:13.038053    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:13.038064    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:13.076436    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:13.076455    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:13.112283    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:13.112298    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:13.126057    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:13.126068    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:13.141382    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:13.141392    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:13.154436    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:13.154449    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:13.175487    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:13.175499    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:13.201407    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:13.201418    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:13.213349    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:13.213361    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:13.252006    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:13.252019    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:13.270042    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:13.270058    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:13.282207    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:13.282218    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:13.294016    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:13.294027    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:17.852674    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:15.807879    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:22.855072    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:22.855562    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:22.894990    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:22.895156    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:22.916660    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:22.916783    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:22.935084    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:22.935179    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:22.947251    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:22.947323    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:22.957750    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:22.957826    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:22.968469    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:22.968546    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:22.978255    4251 logs.go:276] 0 containers: []
	W0920 10:30:22.978267    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:22.978336    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:22.988743    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:22.988756    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:22.988762    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:23.024500    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:23.024599    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:23.025818    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:23.025852    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:23.039850    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:23.039861    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:23.051807    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:23.051818    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:23.071265    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:23.071280    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:23.088395    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:23.088406    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:23.100025    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:23.100035    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:23.123854    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:23.123865    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:23.135434    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:23.135443    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:23.140147    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:23.140155    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:23.181220    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:23.181233    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:23.196473    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:23.196495    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:23.213507    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:23.213526    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:23.228216    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:23.228230    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:23.228255    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:30:23.228259    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:23.228262    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:23.228266    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:23.228268    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:20.810433    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:20.810631    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:20.827398    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:20.827518    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:20.841191    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:20.841266    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:20.851999    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:20.852087    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:20.863003    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:20.863084    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:20.874651    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:20.874722    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:20.885240    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:20.885322    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:20.898516    4398 logs.go:276] 0 containers: []
	W0920 10:30:20.898530    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:20.898603    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:20.908766    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:20.908788    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:20.908797    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:20.922590    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:20.922599    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:20.936944    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:20.936961    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:20.948227    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:20.948239    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:20.987760    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:20.987773    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:21.002026    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:21.002038    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:21.013947    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:21.013958    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:21.028046    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:21.028058    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:21.040472    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:21.040484    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:21.077719    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:21.077736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:21.116719    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:21.116736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:21.127786    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:21.127801    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:21.143448    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:21.143457    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:21.155191    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:21.155201    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:21.180292    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:21.180301    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:21.184926    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:21.184933    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:21.202627    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:21.202640    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:23.715649    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:28.717830    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:28.718018    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:28.735807    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:28.735910    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:28.747061    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:28.747147    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:28.757181    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:28.757254    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:28.768056    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:28.768140    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:28.778793    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:28.778873    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:28.789292    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:28.789374    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:28.800682    4398 logs.go:276] 0 containers: []
	W0920 10:30:28.800694    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:28.800760    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:28.812111    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:28.812132    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:28.812137    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:28.823683    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:28.823698    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:28.828309    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:28.828317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:28.842274    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:28.842287    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:28.854018    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:28.854030    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:28.865149    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:28.865159    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:28.899825    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:28.899839    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:28.914124    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:28.914139    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:28.951399    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:28.951412    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:28.966727    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:28.966739    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:28.978883    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:28.978897    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:29.017734    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:29.017746    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:29.039273    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:29.039286    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:29.059736    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:29.059750    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:29.073693    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:29.073705    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:29.090876    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:29.090886    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:29.109322    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:29.109335    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:33.232188    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:31.635963    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:38.234469    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:38.234907    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:38.267444    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:38.267615    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:38.286649    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:38.286759    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:38.300664    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:38.300799    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:38.316714    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:38.316798    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:38.327388    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:38.327486    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:38.338176    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:38.338260    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:38.348147    4251 logs.go:276] 0 containers: []
	W0920 10:30:38.348158    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:38.348228    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:38.361773    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:38.361789    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:38.361795    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:38.385594    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:38.385606    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:38.396795    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:38.396807    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:38.429456    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:38.429553    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:38.430692    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:38.430697    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:38.435247    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:38.435256    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:38.448439    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:38.448450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:38.460332    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:38.460342    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:38.472505    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:38.472519    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:38.487458    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:38.487468    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:38.499531    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:38.499543    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:38.517337    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:38.517347    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:38.553315    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:38.553329    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:38.571629    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:38.571641    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:38.585692    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:38.585705    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:38.585731    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:30:38.585735    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:38.585738    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:38.585742    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:38.585745    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:36.638099    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:36.638398    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:36.662538    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:36.662669    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:36.678614    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:36.678712    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:36.695705    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:36.695783    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:36.706501    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:36.706584    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:36.716486    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:36.716568    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:36.726968    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:36.727039    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:36.736777    4398 logs.go:276] 0 containers: []
	W0920 10:30:36.736789    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:36.736850    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:36.747380    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:36.747404    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:36.747409    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:36.782291    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:36.782304    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:36.797256    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:36.797272    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:36.810818    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:36.810838    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:36.821856    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:36.821866    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:36.846423    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:36.846432    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:36.850322    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:36.850331    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:36.864132    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:36.864143    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:36.876962    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:36.876979    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:36.891182    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:36.891194    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:36.929079    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:36.929091    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:36.942713    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:36.942725    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:36.954150    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:36.954160    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:36.971115    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:36.971127    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:36.982100    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:36.982116    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:37.019520    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:37.019533    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:37.033735    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:37.033750    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:39.553863    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:44.554495    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:44.554638    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:44.567449    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:44.567538    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:44.577640    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:44.577726    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:44.588294    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:44.588372    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:44.598726    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:44.598803    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:44.609262    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:44.609343    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:44.628914    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:44.629000    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:44.640317    4398 logs.go:276] 0 containers: []
	W0920 10:30:44.640330    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:44.640396    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:44.650795    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:44.650815    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:44.650820    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:44.688574    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:44.688585    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:44.727790    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:44.727807    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:44.743522    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:44.743533    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:44.755978    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:44.755993    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:44.773755    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:44.773767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:44.789654    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:44.789665    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:44.803138    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:44.803151    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:44.807147    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:44.807153    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:44.841433    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:44.841443    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:44.855671    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:44.855683    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:44.867083    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:44.867094    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:44.882870    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:44.882881    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:44.905712    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:44.905720    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:44.919275    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:44.919286    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:44.934306    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:44.934317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:44.946279    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:44.946292    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:48.587648    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:47.459992    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:53.589795    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:53.590079    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:53.617131    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:30:53.617250    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:53.633156    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:30:53.633252    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:53.645477    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:30:53.645559    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:53.655868    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:30:53.655953    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:53.669371    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:30:53.669459    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:53.679650    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:30:53.679727    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:53.689810    4251 logs.go:276] 0 containers: []
	W0920 10:30:53.689823    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:53.689895    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:53.703125    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:30:53.703140    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:53.703146    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:53.708351    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:30:53.708360    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:30:53.722748    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:30:53.722757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:30:53.734901    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:30:53.734912    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:30:53.749988    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:30:53.750000    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:53.761426    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:53.761438    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:30:53.795565    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:53.795662    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:53.796882    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:30:53.796889    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:30:53.810757    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:30:53.810767    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:30:53.822775    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:30:53.822786    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:30:53.834436    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:30:53.834450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:30:53.851288    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:30:53.851298    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:30:53.862648    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:53.862657    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:53.887231    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:53.887238    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:53.931235    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:53.931246    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:30:53.931275    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:30:53.931280    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:30:53.931284    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:30:53.931288    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:30:53.931291    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:30:52.461820    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:52.462409    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:52.503399    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:52.503575    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:52.522095    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:52.522194    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:52.536280    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:52.536367    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:52.547838    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:52.547918    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:52.558415    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:52.558501    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:52.568792    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:52.568879    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:52.579837    4398 logs.go:276] 0 containers: []
	W0920 10:30:52.579848    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:52.579911    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:52.590249    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:52.590266    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:52.590271    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:52.607496    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:52.607508    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:52.619393    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:52.619408    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:52.631008    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:52.631023    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:52.667255    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:52.667263    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:52.702500    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:52.702512    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:52.719515    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:52.719529    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:52.733017    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:52.733030    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:52.747353    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:52.747366    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:52.761477    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:52.761487    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:52.765848    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:52.765854    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:52.789306    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:52.789317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:52.831158    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:52.831170    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:52.847920    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:52.847935    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:52.859227    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:52.859237    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:52.883619    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:52.883633    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:52.894859    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:52.894872    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:55.408264    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:00.410482    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:00.410830    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:00.440019    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:00.440162    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:00.456989    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:00.457109    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:03.935181    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:00.470717    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:00.470811    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:00.483053    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:00.483146    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:00.493414    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:00.493498    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:00.504601    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:00.504684    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:00.515536    4398 logs.go:276] 0 containers: []
	W0920 10:31:00.515549    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:00.515621    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:00.526593    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:00.526616    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:00.526622    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:00.537729    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:00.537742    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:00.552738    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:00.552751    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:00.570481    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:00.570492    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:00.585638    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:00.585654    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:00.597105    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:00.597120    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:00.608493    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:00.608505    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:00.627829    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:00.627845    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:00.651054    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:00.651061    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:00.663149    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:00.663160    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:00.676884    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:00.676894    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:00.717351    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:00.717362    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:00.731396    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:00.731406    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:00.748733    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:00.748745    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:00.787872    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:00.787883    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:00.792353    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:00.792360    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:00.831471    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:00.831482    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:03.346590    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:08.937380    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:08.937500    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:08.952122    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:08.952205    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:08.963155    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:08.963241    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:08.974190    4251 logs.go:276] 2 containers: [a9ee06323540 f9b4c92961ad]
	I0920 10:31:08.974268    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:08.985011    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:08.985098    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:08.995235    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:08.995322    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:09.006079    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:09.006161    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:09.016047    4251 logs.go:276] 0 containers: []
	W0920 10:31:09.016058    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:09.016128    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:09.037870    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:09.037885    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:09.037891    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:09.049784    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:09.049795    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:09.084387    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:09.084484    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:09.085669    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:09.085674    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:09.089813    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:09.089822    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:09.105493    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:09.105507    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:09.116950    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:09.116961    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:09.128746    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:09.128759    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:09.146171    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:09.146185    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:09.183615    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:09.183626    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:09.198022    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:09.198034    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:09.209537    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:09.209547    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:09.224634    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:09.224645    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:09.247982    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:09.247992    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:09.259993    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:09.260006    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:09.260034    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:31:09.260039    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:09.260042    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:09.260045    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:09.260049    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:08.349147    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:08.349481    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:08.375132    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:08.375263    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:08.391117    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:08.391210    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:08.403980    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:08.404072    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:08.414823    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:08.414905    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:08.425221    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:08.425326    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:08.436524    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:08.436607    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:08.446837    4398 logs.go:276] 0 containers: []
	W0920 10:31:08.446847    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:08.446916    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:08.457414    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:08.457433    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:08.457440    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:08.471604    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:08.471615    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:08.488775    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:08.488790    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:08.500488    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:08.500498    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:08.524264    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:08.524273    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:08.540161    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:08.540174    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:08.552318    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:08.552328    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:08.590800    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:08.590810    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:08.612910    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:08.612923    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:08.624474    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:08.624487    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:08.659653    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:08.659667    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:08.663848    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:08.663857    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:08.675869    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:08.675884    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:08.691195    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:08.691205    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:08.703093    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:08.703104    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:08.717128    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:08.717137    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:08.729371    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:08.729381    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:11.270258    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:19.263919    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:16.272568    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:16.272837    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:16.290344    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:16.290452    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:16.304192    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:16.304287    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:16.315671    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:16.315754    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:16.325958    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:16.326042    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:16.336246    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:16.336331    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:16.347139    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:16.347223    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:16.366134    4398 logs.go:276] 0 containers: []
	W0920 10:31:16.366147    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:16.366221    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:16.376559    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:16.376581    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:16.376586    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:16.387830    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:16.387846    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:16.403154    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:16.403164    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:16.421750    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:16.421767    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:16.460181    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:16.460189    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:16.474447    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:16.474458    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:16.488368    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:16.488379    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:16.499729    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:16.499739    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:16.517057    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:16.517069    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:16.553233    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:16.553245    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:16.598515    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:16.598525    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:16.612940    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:16.612955    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:16.627038    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:16.627055    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:16.644394    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:16.644405    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:16.648849    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:16.648859    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:16.660269    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:16.660280    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:16.671936    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:16.671946    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:19.197698    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:24.265991    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:24.266099    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:24.277237    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:24.277322    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:24.288540    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:24.288628    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:24.300659    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:24.300749    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:24.311932    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:24.312020    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:24.323508    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:24.323588    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:24.335373    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:24.335460    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:24.346955    4251 logs.go:276] 0 containers: []
	W0920 10:31:24.346969    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:24.347053    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:24.358601    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:24.358616    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:24.358622    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:24.373486    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:24.373498    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:24.393143    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:24.393159    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:24.405072    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:24.405086    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:24.417630    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:24.417642    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:24.423178    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:24.423188    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:24.438957    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:24.438965    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:24.451988    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:24.451997    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:24.464324    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:24.464333    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:24.488850    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:24.488862    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:24.523456    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:24.523557    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:24.524776    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:24.524785    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:24.536581    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:24.536592    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:24.578438    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:24.578450    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:24.597522    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:24.597543    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:24.609802    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:24.609814    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:24.629532    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:24.629545    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:24.629576    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:31:24.629581    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:24.629584    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:24.629588    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:24.629591    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:24.199166    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:24.199382    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:24.213690    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:24.213793    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:24.225908    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:24.225999    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:24.236333    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:24.236408    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:24.246509    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:24.246591    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:24.256995    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:24.257079    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:24.268010    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:24.268084    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:24.279314    4398 logs.go:276] 0 containers: []
	W0920 10:31:24.279329    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:24.279396    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:24.290646    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:24.290663    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:24.290669    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:24.294893    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:24.294904    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:24.338241    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:24.338252    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:24.357196    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:24.357210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:24.369851    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:24.369864    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:24.382216    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:24.382227    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:24.406642    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:24.406651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:24.421580    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:24.421594    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:24.437905    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:24.437918    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:24.450084    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:24.450098    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:24.462703    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:24.462716    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:24.502202    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:24.502214    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:24.542422    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:24.542438    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:24.554793    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:24.554812    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:24.571166    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:24.571179    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:24.588803    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:24.588822    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:24.615860    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:24.615876    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:27.136382    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:34.633464    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:32.137122    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:32.137325    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:32.148756    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:32.148837    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:32.159188    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:32.159275    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:32.169210    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:32.169280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:32.179409    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:32.179491    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:32.190233    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:32.190314    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:32.200569    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:32.200651    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:32.211296    4398 logs.go:276] 0 containers: []
	W0920 10:31:32.211308    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:32.211380    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:32.223207    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:32.223226    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:32.223232    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:32.261327    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:32.261341    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:32.278213    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:32.278230    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:32.290230    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:32.290242    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:32.304164    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:32.304175    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:32.319089    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:32.319099    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:32.338228    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:32.338239    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:32.362579    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:32.362589    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:32.374464    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:32.374474    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:32.378571    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:32.378578    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:32.396164    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:32.396177    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:32.437322    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:32.437338    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:32.448682    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:32.448694    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:32.464192    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:32.464207    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:32.479539    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:32.479554    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:32.490820    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:32.490833    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:32.502279    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:32.502290    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:35.042177    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:39.635584    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:39.635751    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:39.646699    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:39.646794    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:39.657459    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:39.657536    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:39.668193    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:39.668276    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:39.678242    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:39.678323    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:39.689205    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:39.689317    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:39.699702    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:39.699784    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:39.710158    4251 logs.go:276] 0 containers: []
	W0920 10:31:39.710172    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:39.710240    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:39.720604    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:39.720621    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:39.720628    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:39.734943    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:39.734953    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:39.746657    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:39.746668    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:39.764927    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:39.764937    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:39.798167    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:39.798272    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:39.799491    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:39.799498    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:39.811552    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:39.811563    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:39.823772    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:39.823788    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:39.838380    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:39.838390    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:39.843432    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:39.843438    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:39.881684    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:39.881700    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:39.895855    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:39.895865    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:39.907465    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:39.907475    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:39.919454    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:39.919465    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:39.936689    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:39.936699    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:39.959694    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:39.959702    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:39.971845    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:39.971856    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:39.971882    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:31:39.971886    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:39.971889    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:39.971893    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:39.971896    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:40.042795    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:40.042964    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:40.053659    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:40.053742    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:40.064580    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:40.064696    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:40.074976    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:40.075060    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:40.085491    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:40.085569    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:40.096289    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:40.096371    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:40.106993    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:40.107078    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:40.121548    4398 logs.go:276] 0 containers: []
	W0920 10:31:40.121559    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:40.121624    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:40.132395    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:40.132415    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:40.132421    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:40.150721    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:40.150734    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:40.168185    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:40.168199    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:40.182121    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:40.182132    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:40.216215    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:40.216227    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:40.230450    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:40.230461    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:40.268796    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:40.268806    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:40.283923    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:40.283934    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:40.295755    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:40.295767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:40.307212    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:40.307224    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:40.318046    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:40.318058    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:40.338475    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:40.338486    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:40.354499    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:40.354511    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:40.378683    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:40.378694    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:40.390379    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:40.390393    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:40.430552    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:40.430561    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:40.434726    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:40.434736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:42.948920    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:49.975835    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:47.951130    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:47.951296    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:47.962305    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:47.962378    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:47.975683    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:47.975770    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:47.986883    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:47.986969    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:47.997574    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:47.997660    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:48.007947    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:48.008032    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:48.019611    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:48.019692    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:48.034553    4398 logs.go:276] 0 containers: []
	W0920 10:31:48.034564    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:48.034632    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:48.044607    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:48.044625    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:48.044631    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:48.058914    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:48.058925    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:48.070359    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:48.070371    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:48.082014    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:48.082027    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:48.100318    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:48.100330    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:48.115115    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:48.115130    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:48.153461    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:48.153470    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:48.157425    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:48.157432    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:48.175466    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:48.175482    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:48.192991    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:48.193004    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:48.215388    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:48.215396    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:48.226974    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:48.226984    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:48.262239    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:48.262251    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:48.277105    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:48.277116    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:48.288751    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:48.288763    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:48.301246    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:48.301261    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:48.340155    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:48.340170    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:54.977973    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:54.978125    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:54.989131    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:31:54.989227    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:55.000860    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:31:55.000949    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:55.012091    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:31:55.012180    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:55.022948    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:31:55.023035    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:55.033445    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:31:55.033525    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:55.045361    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:31:55.045441    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:55.056370    4251 logs.go:276] 0 containers: []
	W0920 10:31:55.056382    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:55.056453    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:55.066659    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:31:55.066677    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:55.066683    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:50.854534    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:55.113411    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:31:55.113422    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:31:55.129073    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:31:55.129084    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:31:55.140434    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:31:55.140445    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:31:55.159000    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:31:55.159012    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:31:55.174177    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:31:55.174187    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:31:55.193646    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:31:55.193659    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:31:55.205585    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:31:55.205599    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:31:55.217376    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:31:55.217389    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:31:55.228823    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:55.228834    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:55.252143    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:31:55.252154    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:55.264596    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:55.264609    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:31:55.299984    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:55.300088    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:55.301315    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:55.301328    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:55.306747    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:31:55.306757    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:31:55.318924    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:31:55.318937    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:31:55.331566    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:55.331581    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:31:55.331610    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:31:55.331616    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:31:55.331620    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:31:55.331624    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:31:55.331651    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:31:55.856659    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:55.856978    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:55.884372    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:55.884520    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:55.903383    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:55.903477    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:55.916475    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:55.916563    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:55.927519    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:55.927610    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:55.938309    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:55.938391    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:55.949194    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:55.949280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:55.959222    4398 logs.go:276] 0 containers: []
	W0920 10:31:55.959233    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:55.959297    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:55.972928    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:55.972946    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:55.972973    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:55.987035    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:55.987048    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:55.998936    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:55.998949    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:56.017365    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:56.017375    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:56.032134    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:56.032144    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:56.077144    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:56.077155    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:56.091128    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:56.091140    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:56.103188    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:56.103199    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:56.127084    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:56.127092    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:56.131066    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:56.131073    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:56.149055    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:56.149071    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:56.163686    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:56.163707    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:56.181538    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:56.181559    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:56.194576    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:56.194592    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:56.206045    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:56.206057    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:56.217575    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:56.217587    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:56.255658    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:56.255665    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:58.797925    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:03.799377    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:03.799963    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:03.840172    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:03.840341    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:03.861837    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:03.861972    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:03.876683    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:03.876774    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:03.889387    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:03.889475    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:03.900133    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:03.900216    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:03.911453    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:03.911537    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:03.921930    4398 logs.go:276] 0 containers: []
	W0920 10:32:03.921947    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:03.922027    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:03.936683    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:03.936703    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:03.936709    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:03.973279    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:03.973292    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:03.988021    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:03.988032    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:04.000368    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:04.000380    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:04.012733    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:04.012748    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:04.028601    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:04.028611    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:04.040750    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:04.040765    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:04.053100    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:04.053112    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:04.077045    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:04.077061    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:04.091104    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:04.091120    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:04.105639    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:04.105648    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:04.118132    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:04.118148    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:04.135620    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:04.135630    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:04.174690    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:04.174701    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:04.178893    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:04.178899    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:04.192418    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:04.192429    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:04.231822    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:04.231834    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:05.335495    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:06.744942    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:10.337614    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:10.337769    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:10.353849    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:10.353944    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:10.366442    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:10.366536    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:10.379829    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:10.379910    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:10.390701    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:10.390785    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:10.401467    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:10.401561    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:10.414226    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:10.414312    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:10.424880    4251 logs.go:276] 0 containers: []
	W0920 10:32:10.424890    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:10.424960    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:10.434828    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:10.434842    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:10.434847    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:10.446272    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:10.446287    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:10.460632    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:10.460647    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:10.474279    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:10.474295    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:10.485707    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:10.485720    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:10.501550    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:10.501561    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:10.526484    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:10.526492    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:10.559604    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:10.559703    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:10.560877    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:10.560884    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:10.595894    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:10.595905    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:10.612502    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:10.612511    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:10.630614    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:10.630623    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:10.634897    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:10.634903    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:10.646648    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:10.646658    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:10.659194    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:10.659232    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:10.676656    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:10.676666    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:10.691457    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:10.691465    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:10.691490    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:32:10.691495    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:10.691498    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:10.691501    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:10.691504    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:11.747506    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:11.747767    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:11.765059    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:11.765171    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:11.778326    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:11.778415    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:11.788997    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:11.789083    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:11.801471    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:11.801557    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:11.812410    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:11.812497    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:11.823303    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:11.823383    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:11.837616    4398 logs.go:276] 0 containers: []
	W0920 10:32:11.837629    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:11.837698    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:11.848507    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:11.848527    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:11.848533    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:11.852827    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:11.852837    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:11.865223    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:11.865235    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:11.880999    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:11.881011    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:11.898269    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:11.898283    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:11.921321    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:11.921328    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:11.960174    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:11.960185    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:11.997769    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:11.997779    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:12.013550    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:12.013563    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:12.027184    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:12.027202    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:12.081113    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:12.081126    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:12.097695    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:12.097709    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:12.110485    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:12.110500    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:12.121861    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:12.121874    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:12.145458    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:12.145467    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:12.157279    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:12.157293    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:12.177125    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:12.177140    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:14.690932    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:19.693012    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:19.693280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:19.723861    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:19.723990    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:19.749184    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:19.749262    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:19.768982    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:19.769061    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:19.779567    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:19.779644    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:19.789977    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:19.790064    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:19.802269    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:19.802351    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:19.812189    4398 logs.go:276] 0 containers: []
	W0920 10:32:19.812201    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:19.812267    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:19.822813    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:19.822831    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:19.822836    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:19.837067    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:19.837078    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:19.850072    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:19.850084    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:19.861007    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:19.861020    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:19.873708    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:19.873717    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:19.877636    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:19.877645    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:19.912388    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:19.912403    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:19.927044    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:19.927055    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:19.944283    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:19.944296    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:19.957899    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:19.957914    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:19.973210    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:19.973223    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:19.988516    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:19.988526    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:20.000248    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:20.000259    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:20.014404    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:20.014415    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:20.037378    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:20.037394    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:20.074563    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:20.074574    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:20.117198    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:20.117210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:20.695375    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:22.630955    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:27.633136    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:27.633223    4398 kubeadm.go:597] duration metric: took 4m3.973096041s to restartPrimaryControlPlane
	W0920 10:32:27.633288    4398 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:32:27.633314    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:32:28.614739    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:32:28.619961    4398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:32:28.622996    4398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:32:28.625675    4398 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:32:28.625682    4398 kubeadm.go:157] found existing configuration files:
	
	I0920 10:32:28.625707    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf
	I0920 10:32:28.628176    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:32:28.628208    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:32:28.631241    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf
	I0920 10:32:28.634330    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:32:28.634356    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:32:28.636800    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf
	I0920 10:32:28.639446    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:32:28.639465    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:32:28.642629    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf
	I0920 10:32:28.645077    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:32:28.645103    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:32:28.647687    4398 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:32:28.665316    4398 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:32:28.665370    4398 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:32:28.715500    4398 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:32:28.715551    4398 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:32:28.715604    4398 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:32:28.764379    4398 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:32:28.767516    4398 out.go:235]   - Generating certificates and keys ...
	I0920 10:32:28.767549    4398 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:32:28.767582    4398 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:32:28.767620    4398 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:32:28.767660    4398 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:32:28.767697    4398 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:32:28.767731    4398 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:32:28.767764    4398 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:32:28.767819    4398 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:32:28.767861    4398 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:32:28.767917    4398 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:32:28.767951    4398 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:32:28.767991    4398 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:32:28.957095    4398 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:32:29.062088    4398 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:32:29.244712    4398 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:32:29.347698    4398 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:32:29.379233    4398 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:32:29.379632    4398 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:32:29.379693    4398 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:32:29.453074    4398 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:32:25.697134    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:25.697281    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:25.711697    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:25.711804    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:25.723940    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:25.724027    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:25.734294    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:25.734379    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:25.744772    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:25.744857    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:25.755529    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:25.755614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:25.766527    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:25.766614    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:25.776744    4251 logs.go:276] 0 containers: []
	W0920 10:32:25.776757    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:25.776827    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:25.787447    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:25.787467    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:25.787472    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:25.799789    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:25.799801    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:25.812127    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:25.812139    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:25.825339    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:25.825352    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:25.837876    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:25.837889    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:25.842319    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:25.842325    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:25.876953    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:25.876965    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:25.894155    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:25.894167    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:25.908536    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:25.908545    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:25.927590    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:25.927602    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:25.952014    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:25.952022    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:25.986214    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:25.986311    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:25.987496    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:25.987501    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:26.000113    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:26.000124    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:26.015768    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:26.015779    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:26.033230    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:26.033244    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:26.045525    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:26.045539    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:26.045566    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:32:26.045570    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:26.045575    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:26.045578    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:26.045580    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:29.461187    4398 out.go:235]   - Booting up control plane ...
	I0920 10:32:29.461242    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:32:29.461280    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:32:29.461327    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:32:29.461371    4398 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:32:29.461462    4398 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:32:34.552928    4398 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001336 seconds
	I0920 10:32:34.553025    4398 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:32:34.558811    4398 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:32:35.066822    4398 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:32:35.066931    4398 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-593000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:32:35.570733    4398 kubeadm.go:310] [bootstrap-token] Using token: v0pk0r.yk1w2751tvqi9mna
	I0920 10:32:35.576334    4398 out.go:235]   - Configuring RBAC rules ...
	I0920 10:32:35.576398    4398 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:32:35.576440    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:32:35.579899    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:32:35.580813    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:32:35.581679    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:32:35.582431    4398 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:32:35.586224    4398 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:32:35.757040    4398 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:32:35.973955    4398 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:32:35.974590    4398 kubeadm.go:310] 
	I0920 10:32:35.974629    4398 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:32:35.974633    4398 kubeadm.go:310] 
	I0920 10:32:35.974670    4398 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:32:35.974696    4398 kubeadm.go:310] 
	I0920 10:32:35.974709    4398 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:32:35.974742    4398 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:32:35.974772    4398 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:32:35.974775    4398 kubeadm.go:310] 
	I0920 10:32:35.974804    4398 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:32:35.974833    4398 kubeadm.go:310] 
	I0920 10:32:35.974896    4398 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:32:35.974901    4398 kubeadm.go:310] 
	I0920 10:32:35.974926    4398 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:32:35.975036    4398 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:32:35.975205    4398 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:32:35.975225    4398 kubeadm.go:310] 
	I0920 10:32:35.975302    4398 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:32:35.975510    4398 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:32:35.975534    4398 kubeadm.go:310] 
	I0920 10:32:35.975722    4398 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v0pk0r.yk1w2751tvqi9mna \
	I0920 10:32:35.975868    4398 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a \
	I0920 10:32:35.975895    4398 kubeadm.go:310] 	--control-plane 
	I0920 10:32:35.975901    4398 kubeadm.go:310] 
	I0920 10:32:35.975994    4398 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:32:35.976066    4398 kubeadm.go:310] 
	I0920 10:32:35.976228    4398 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v0pk0r.yk1w2751tvqi9mna \
	I0920 10:32:35.976289    4398 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a 
	I0920 10:32:35.976353    4398 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:32:35.976359    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:32:35.976368    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:32:35.984565    4398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:32:35.988625    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:32:35.991883    4398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 10:32:35.996385    4398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:32:35.996444    4398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-593000 minikube.k8s.io/updated_at=2024_09_20T10_32_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=stopped-upgrade-593000 minikube.k8s.io/primary=true
	I0920 10:32:35.996445    4398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:32:36.040327    4398 kubeadm.go:1113] duration metric: took 43.922042ms to wait for elevateKubeSystemPrivileges
	I0920 10:32:36.040348    4398 ops.go:34] apiserver oom_adj: -16
	I0920 10:32:36.040380    4398 kubeadm.go:394] duration metric: took 4m12.301018208s to StartCluster
	I0920 10:32:36.040392    4398 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:36.040478    4398 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:32:36.040942    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:36.041168    4398 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:32:36.041191    4398 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:32:36.041249    4398 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-593000"
	I0920 10:32:36.041258    4398 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-593000"
	W0920 10:32:36.041261    4398 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:32:36.041272    4398 host.go:66] Checking if "stopped-upgrade-593000" exists ...
	I0920 10:32:36.041283    4398 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-593000"
	I0920 10:32:36.041294    4398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-593000"
	I0920 10:32:36.041296    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:32:36.044525    4398 out.go:177] * Verifying Kubernetes components...
	I0920 10:32:36.045158    4398 kapi.go:59] client config for stopped-upgrade-593000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102212030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:32:36.047855    4398 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-593000"
	W0920 10:32:36.047861    4398 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:32:36.047870    4398 host.go:66] Checking if "stopped-upgrade-593000" exists ...
	I0920 10:32:36.048436    4398 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:32:36.048442    4398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:32:36.048447    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:32:36.050577    4398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:32:36.143516    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:36.054516    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:32:36.058576    4398 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:32:36.058583    4398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:32:36.058588    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:32:36.130291    4398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:32:36.135854    4398 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:32:36.135905    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:32:36.141247    4398 api_server.go:72] duration metric: took 100.066792ms to wait for apiserver process to appear ...
	I0920 10:32:36.141255    4398 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:32:36.141263    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:36.153200    4398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:32:36.168486    4398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:32:36.544781    4398 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:32:36.544793    4398 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:32:41.145602    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:41.145775    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:41.165680    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:41.165788    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:41.182716    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:41.182801    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:41.194643    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:41.194737    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:41.206240    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:41.206319    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:41.221630    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:41.221720    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:41.232409    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:41.232488    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:41.243956    4251 logs.go:276] 0 containers: []
	W0920 10:32:41.243968    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:41.244032    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:41.254548    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:41.254568    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:41.254572    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:41.266402    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:41.266414    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:41.278235    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:41.278250    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:41.298616    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:41.298630    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:41.311521    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:41.311533    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:41.333326    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:41.333335    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:41.344999    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:41.345013    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:41.359047    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:41.359061    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:41.373924    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:41.373937    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:41.392018    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:41.392028    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:41.426171    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:41.426267    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:41.427407    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:41.427412    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:41.432158    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:41.432165    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:41.465813    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:41.465823    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:41.477533    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:41.477546    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:41.488922    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:41.488932    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:41.513324    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:41.513333    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:41.513357    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:32:41.513362    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:41.513365    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:41.513368    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:41.513371    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:41.141575    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:41.141619    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:46.143288    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:46.143339    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:51.517451    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:51.143636    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:51.143666    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:56.519712    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:56.519899    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:56.531491    4251 logs.go:276] 1 containers: [511d63b7e1e2]
	I0920 10:32:56.531563    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:56.541570    4251 logs.go:276] 1 containers: [3cf5fefcb830]
	I0920 10:32:56.541658    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:56.552771    4251 logs.go:276] 4 containers: [7a267dac75db 6ad5ab733c7a a9ee06323540 f9b4c92961ad]
	I0920 10:32:56.552860    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:56.563403    4251 logs.go:276] 1 containers: [79c4d2dffd49]
	I0920 10:32:56.563487    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:56.573519    4251 logs.go:276] 1 containers: [8a6f8581ccdf]
	I0920 10:32:56.573597    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:56.584154    4251 logs.go:276] 1 containers: [cffbfe00db18]
	I0920 10:32:56.584229    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:56.594657    4251 logs.go:276] 0 containers: []
	W0920 10:32:56.594673    4251 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:56.594737    4251 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:56.605509    4251 logs.go:276] 1 containers: [8a4f38f0255e]
	I0920 10:32:56.605530    4251 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:56.605536    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 10:32:56.640505    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:56.640604    4251 logs.go:138] Found kubelet problem: Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:56.641819    4251 logs.go:123] Gathering logs for coredns [7a267dac75db] ...
	I0920 10:32:56.641827    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a267dac75db"
	I0920 10:32:56.662086    4251 logs.go:123] Gathering logs for kube-proxy [8a6f8581ccdf] ...
	I0920 10:32:56.662098    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6f8581ccdf"
	I0920 10:32:56.682302    4251 logs.go:123] Gathering logs for kube-controller-manager [cffbfe00db18] ...
	I0920 10:32:56.682314    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffbfe00db18"
	I0920 10:32:56.704214    4251 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:56.704225    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:56.708815    4251 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:56.708822    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:56.742862    4251 logs.go:123] Gathering logs for etcd [3cf5fefcb830] ...
	I0920 10:32:56.742873    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf5fefcb830"
	I0920 10:32:56.757524    4251 logs.go:123] Gathering logs for coredns [f9b4c92961ad] ...
	I0920 10:32:56.757538    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b4c92961ad"
	I0920 10:32:56.769894    4251 logs.go:123] Gathering logs for kube-scheduler [79c4d2dffd49] ...
	I0920 10:32:56.769905    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79c4d2dffd49"
	I0920 10:32:56.785140    4251 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:56.785151    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:56.809462    4251 logs.go:123] Gathering logs for kube-apiserver [511d63b7e1e2] ...
	I0920 10:32:56.809470    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511d63b7e1e2"
	I0920 10:32:56.823201    4251 logs.go:123] Gathering logs for storage-provisioner [8a4f38f0255e] ...
	I0920 10:32:56.823210    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a4f38f0255e"
	I0920 10:32:56.834290    4251 logs.go:123] Gathering logs for coredns [6ad5ab733c7a] ...
	I0920 10:32:56.834304    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad5ab733c7a"
	I0920 10:32:56.850288    4251 logs.go:123] Gathering logs for coredns [a9ee06323540] ...
	I0920 10:32:56.850299    4251 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ee06323540"
	I0920 10:32:56.862028    4251 logs.go:123] Gathering logs for container status ...
	I0920 10:32:56.862042    4251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:56.874035    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:56.874049    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 10:32:56.874075    4251 out.go:270] X Problems detected in kubelet:
	W0920 10:32:56.874080    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	W0920 10:32:56.874085    4251 out.go:270]   Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	I0920 10:32:56.874088    4251 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:56.874095    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:56.143977    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:56.144004    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:01.144427    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:01.144451    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:06.144928    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:06.144964    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:33:06.547412    4398 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:33:06.551841    4398 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:33:06.878191    4251 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:06.559622    4398 addons.go:510] duration metric: took 30.5185825s for enable addons: enabled=[storage-provisioner]
	I0920 10:33:11.880615    4251 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:11.884880    4251 out.go:201] 
	W0920 10:33:11.888871    4251 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:33:11.888878    4251 out.go:270] * 
	W0920 10:33:11.889285    4251 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:33:11.899758    4251 out.go:201] 
	I0920 10:33:11.145758    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:11.145781    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:16.146644    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:16.146670    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:21.147776    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:21.147821    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-20 17:24:08 UTC, ends at Fri 2024-09-20 17:33:28 UTC. --
	Sep 20 17:33:08 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:08Z" level=error msg="ContainerStats resp: {0x40004e7340 linux}"
	Sep 20 17:33:08 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:08Z" level=error msg="ContainerStats resp: {0x40004e7880 linux}"
	Sep 20 17:33:08 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:08Z" level=error msg="ContainerStats resp: {0x40004e7a80 linux}"
	Sep 20 17:33:09 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:09Z" level=error msg="ContainerStats resp: {0x40007f1340 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x40004e6d80 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x40004e7880 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x40007a63c0 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x40007a70c0 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x4000890700 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x40008908c0 linux}"
	Sep 20 17:33:10 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:10Z" level=error msg="ContainerStats resp: {0x4000890e40 linux}"
	Sep 20 17:33:11 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:33:16 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:33:20 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:20Z" level=error msg="ContainerStats resp: {0x40000b8540 linux}"
	Sep 20 17:33:20 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:20Z" level=error msg="ContainerStats resp: {0x4000826900 linux}"
	Sep 20 17:33:21 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:33:21 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:21Z" level=error msg="ContainerStats resp: {0x40008268c0 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x4000827c00 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x40005ae080 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x4000928480 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x40005aea80 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x4000928fc0 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x4000929380 linux}"
	Sep 20 17:33:22 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:22Z" level=error msg="ContainerStats resp: {0x4000929ac0 linux}"
	Sep 20 17:33:26 running-upgrade-444000 cri-dockerd[3031]: time="2024-09-20T17:33:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	726c43ea09f44       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   1c3cef7fb7d31
	1430e5ee6c92d       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   125485e32325c
	7a267dac75db7       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   125485e32325c
	6ad5ab733c7aa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1c3cef7fb7d31
	8a6f8581ccdfd       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   9377960f12536
	8a4f38f0255e5       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   2242b7d984719
	79c4d2dffd499       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   052bb77567d00
	511d63b7e1e23       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   923352eb590d7
	cffbfe00db189       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c178de63c2b62
	3cf5fefcb830f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   92804ebac38eb
	
	
	==> coredns [1430e5ee6c92] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5331076840816235051.1444150111012261737. HINFO: read udp 10.244.0.3:35109->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5331076840816235051.1444150111012261737. HINFO: read udp 10.244.0.3:38596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5331076840816235051.1444150111012261737. HINFO: read udp 10.244.0.3:40601->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5331076840816235051.1444150111012261737. HINFO: read udp 10.244.0.3:40346->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5331076840816235051.1444150111012261737. HINFO: read udp 10.244.0.3:58649->10.0.2.3:53: i/o timeout
	
	
	==> coredns [6ad5ab733c7a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:53909->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:45362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:36489->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:36160->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:50253->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:34733->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:33725->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:46405->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:44342->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4402982925896259090.1006364705422164213. HINFO: read udp 10.244.0.2:49218->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [726c43ea09f4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4027769999503285110.5069022150125569973. HINFO: read udp 10.244.0.2:49960->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4027769999503285110.5069022150125569973. HINFO: read udp 10.244.0.2:42677->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4027769999503285110.5069022150125569973. HINFO: read udp 10.244.0.2:42813->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4027769999503285110.5069022150125569973. HINFO: read udp 10.244.0.2:51011->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4027769999503285110.5069022150125569973. HINFO: read udp 10.244.0.2:33749->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7a267dac75db] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:44889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:35008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:52770->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:32882->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:51610->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:60627->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:48091->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:46169->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:60794->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7597213333099240595.2787459247403390428. HINFO: read udp 10.244.0.3:45988->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-444000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-444000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=running-upgrade-444000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T10_29_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:29:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-444000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:29:07 +0000   Fri, 20 Sep 2024 17:29:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:29:07 +0000   Fri, 20 Sep 2024 17:29:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:29:07 +0000   Fri, 20 Sep 2024 17:29:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:29:07 +0000   Fri, 20 Sep 2024 17:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-444000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc272dbe359949bca7f7b3361ee83b41
	  System UUID:                fc272dbe359949bca7f7b3361ee83b41
	  Boot ID:                    292f8c47-ed4a-43f4-ae6f-e50da198116a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w2fvs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-wkdlx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-444000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-444000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-444000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-ptnd6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-running-upgrade-444000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m27s (x5 over 4m27s)  kubelet          Node running-upgrade-444000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x5 over 4m27s)  kubelet          Node running-upgrade-444000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x5 over 4m27s)  kubelet          Node running-upgrade-444000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-444000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-444000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-444000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-444000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s                   node-controller  Node running-upgrade-444000 event: Registered Node running-upgrade-444000 in Controller
	
	
	==> dmesg <==
	[  +1.604137] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.075933] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.083814] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.140770] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.081799] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.080693] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.475910] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[  +8.670628] systemd-fstab-generator[1924]: Ignoring "noauto" for root device
	[  +2.893184] systemd-fstab-generator[2192]: Ignoring "noauto" for root device
	[  +0.152297] systemd-fstab-generator[2227]: Ignoring "noauto" for root device
	[  +0.101048] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.093587] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[ +13.269457] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.208657] systemd-fstab-generator[2985]: Ignoring "noauto" for root device
	[  +0.082399] systemd-fstab-generator[2999]: Ignoring "noauto" for root device
	[  +0.076490] systemd-fstab-generator[3010]: Ignoring "noauto" for root device
	[  +0.097372] systemd-fstab-generator[3024]: Ignoring "noauto" for root device
	[  +2.312005] systemd-fstab-generator[3175]: Ignoring "noauto" for root device
	[  +3.101776] systemd-fstab-generator[3568]: Ignoring "noauto" for root device
	[  +1.371713] systemd-fstab-generator[3822]: Ignoring "noauto" for root device
	[Sep20 17:25] kauditd_printk_skb: 68 callbacks suppressed
	[ +38.967627] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 17:29] systemd-fstab-generator[11167]: Ignoring "noauto" for root device
	[  +5.643165] systemd-fstab-generator[11768]: Ignoring "noauto" for root device
	[  +0.477749] systemd-fstab-generator[11901]: Ignoring "noauto" for root device
	
	
	==> etcd [3cf5fefcb830] <==
	{"level":"info","ts":"2024-09-20T17:29:02.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-20T17:29:02.698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-20T17:29:02.698Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:29:02.693Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-20T17:29:02.698Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:29:02.693Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:29:02.698Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-444000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:29:03.086Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:29:03.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-20T17:29:03.090Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:29:03.090Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:29:03.090Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:29:03.094Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:29:03.094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:29:03.094Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:29:03.094Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:29:03.094Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 17:33:28 up 9 min,  0 users,  load average: 0.47, 0.40, 0.21
	Linux running-upgrade-444000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [511d63b7e1e2] <==
	I0920 17:29:04.540807       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0920 17:29:04.558931       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0920 17:29:04.558993       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:29:04.561079       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0920 17:29:04.561883       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0920 17:29:04.561925       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:29:04.579036       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0920 17:29:05.294761       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 17:29:05.467957       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0920 17:29:05.471930       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0920 17:29:05.472076       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 17:29:05.604557       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:29:05.618240       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 17:29:05.722933       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0920 17:29:05.725552       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0920 17:29:05.726027       1 controller.go:611] quota admission added evaluator for: endpoints
	I0920 17:29:05.728124       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:29:06.603157       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0920 17:29:07.190659       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0920 17:29:07.193812       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0920 17:29:07.212858       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0920 17:29:07.247976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 17:29:19.605250       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:29:20.355745       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0920 17:29:21.284240       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [cffbfe00db18] <==
	I0920 17:29:19.453571       1 shared_informer.go:262] Caches are synced for namespace
	I0920 17:29:19.454717       1 shared_informer.go:262] Caches are synced for service account
	I0920 17:29:19.456876       1 shared_informer.go:262] Caches are synced for crt configmap
	I0920 17:29:19.464291       1 shared_informer.go:262] Caches are synced for PVC protection
	I0920 17:29:19.471113       1 shared_informer.go:262] Caches are synced for taint
	I0920 17:29:19.471142       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0920 17:29:19.471161       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-444000. Assuming now as a timestamp.
	I0920 17:29:19.471175       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0920 17:29:19.471205       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0920 17:29:19.471272       1 event.go:294] "Event occurred" object="running-upgrade-444000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-444000 event: Registered Node running-upgrade-444000 in Controller"
	I0920 17:29:19.608115       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ptnd6"
	I0920 17:29:19.608332       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:29:19.617599       1 shared_informer.go:262] Caches are synced for deployment
	I0920 17:29:19.619728       1 shared_informer.go:262] Caches are synced for disruption
	I0920 17:29:19.619737       1 disruption.go:371] Sending events to api server.
	I0920 17:29:19.659197       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:29:19.666423       1 shared_informer.go:262] Caches are synced for job
	I0920 17:29:19.676189       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0920 17:29:19.680532       1 shared_informer.go:262] Caches are synced for cronjob
	I0920 17:29:20.074981       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:29:20.105351       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:29:20.105391       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 17:29:20.357126       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0920 17:29:20.457081       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-w2fvs"
	I0920 17:29:20.462189       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wkdlx"
	
	
	==> kube-proxy [8a6f8581ccdf] <==
	I0920 17:29:21.273573       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0920 17:29:21.273597       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0920 17:29:21.273608       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0920 17:29:21.282480       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0920 17:29:21.282494       1 server_others.go:206] "Using iptables Proxier"
	I0920 17:29:21.282549       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0920 17:29:21.282683       1 server.go:661] "Version info" version="v1.24.1"
	I0920 17:29:21.282692       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:29:21.282931       1 config.go:317] "Starting service config controller"
	I0920 17:29:21.282943       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0920 17:29:21.282981       1 config.go:226] "Starting endpoint slice config controller"
	I0920 17:29:21.282988       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0920 17:29:21.283256       1 config.go:444] "Starting node config controller"
	I0920 17:29:21.283280       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0920 17:29:21.383043       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0920 17:29:21.383052       1 shared_informer.go:262] Caches are synced for service config
	I0920 17:29:21.383497       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [79c4d2dffd49] <==
	W0920 17:29:04.516397       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:29:04.516401       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0920 17:29:04.516416       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:29:04.516423       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 17:29:04.516439       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:29:04.516445       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0920 17:29:04.516461       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:29:04.516464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0920 17:29:04.516480       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:29:04.516483       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0920 17:29:04.516500       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:29:04.516507       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 17:29:04.516523       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:29:04.516529       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 17:29:04.516546       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:29:04.516549       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 17:29:05.337573       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:29:05.337651       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0920 17:29:05.346739       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:29:05.346813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 17:29:05.362426       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:29:05.362516       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0920 17:29:05.463905       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:29:05.464548       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0920 17:29:05.713019       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-20 17:24:08 UTC, ends at Fri 2024-09-20 17:33:28 UTC. --
	Sep 20 17:29:08 running-upgrade-444000 kubelet[11775]: E0920 17:29:08.823503   11775 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-444000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-444000"
	Sep 20 17:29:09 running-upgrade-444000 kubelet[11775]: E0920 17:29:09.023401   11775 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-444000\" already exists" pod="kube-system/etcd-running-upgrade-444000"
	Sep 20 17:29:09 running-upgrade-444000 kubelet[11775]: E0920 17:29:09.222069   11775 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-444000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-444000"
	Sep 20 17:29:09 running-upgrade-444000 kubelet[11775]: I0920 17:29:09.419378   11775 request.go:601] Waited for 1.143495745s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 20 17:29:09 running-upgrade-444000 kubelet[11775]: E0920 17:29:09.423070   11775 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-444000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-444000"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.430745   11775 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.431239   11775 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.476370   11775 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: W0920 17:29:19.477724   11775 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: E0920 17:29:19.477746   11775 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-444000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-444000' and this object
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.610525   11775 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.632810   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcg8x\" (UniqueName: \"kubernetes.io/projected/bac0d6df-4819-4e2b-95c7-15fd657a71fd-kube-api-access-qcg8x\") pod \"storage-provisioner\" (UID: \"bac0d6df-4819-4e2b-95c7-15fd657a71fd\") " pod="kube-system/storage-provisioner"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.632846   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bac0d6df-4819-4e2b-95c7-15fd657a71fd-tmp\") pod \"storage-provisioner\" (UID: \"bac0d6df-4819-4e2b-95c7-15fd657a71fd\") " pod="kube-system/storage-provisioner"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.735894   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b65aae82-9115-4e7e-9595-3d7874cec726-kube-proxy\") pod \"kube-proxy-ptnd6\" (UID: \"b65aae82-9115-4e7e-9595-3d7874cec726\") " pod="kube-system/kube-proxy-ptnd6"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.736001   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b65aae82-9115-4e7e-9595-3d7874cec726-lib-modules\") pod \"kube-proxy-ptnd6\" (UID: \"b65aae82-9115-4e7e-9595-3d7874cec726\") " pod="kube-system/kube-proxy-ptnd6"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.736017   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdhl\" (UniqueName: \"kubernetes.io/projected/b65aae82-9115-4e7e-9595-3d7874cec726-kube-api-access-7rdhl\") pod \"kube-proxy-ptnd6\" (UID: \"b65aae82-9115-4e7e-9595-3d7874cec726\") " pod="kube-system/kube-proxy-ptnd6"
	Sep 20 17:29:19 running-upgrade-444000 kubelet[11775]: I0920 17:29:19.736034   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b65aae82-9115-4e7e-9595-3d7874cec726-xtables-lock\") pod \"kube-proxy-ptnd6\" (UID: \"b65aae82-9115-4e7e-9595-3d7874cec726\") " pod="kube-system/kube-proxy-ptnd6"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.459488   11775 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.464157   11775 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.644113   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/627e19be-1f36-4256-9606-7b547dc5f66a-config-volume\") pod \"coredns-6d4b75cb6d-w2fvs\" (UID: \"627e19be-1f36-4256-9606-7b547dc5f66a\") " pod="kube-system/coredns-6d4b75cb6d-w2fvs"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.644137   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdzv\" (UniqueName: \"kubernetes.io/projected/627e19be-1f36-4256-9606-7b547dc5f66a-kube-api-access-xtdzv\") pod \"coredns-6d4b75cb6d-w2fvs\" (UID: \"627e19be-1f36-4256-9606-7b547dc5f66a\") " pod="kube-system/coredns-6d4b75cb6d-w2fvs"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.644149   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgmz\" (UniqueName: \"kubernetes.io/projected/2777acf5-d56d-47dc-b679-66157d024697-kube-api-access-crgmz\") pod \"coredns-6d4b75cb6d-wkdlx\" (UID: \"2777acf5-d56d-47dc-b679-66157d024697\") " pod="kube-system/coredns-6d4b75cb6d-wkdlx"
	Sep 20 17:29:20 running-upgrade-444000 kubelet[11775]: I0920 17:29:20.644164   11775 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2777acf5-d56d-47dc-b679-66157d024697-config-volume\") pod \"coredns-6d4b75cb6d-wkdlx\" (UID: \"2777acf5-d56d-47dc-b679-66157d024697\") " pod="kube-system/coredns-6d4b75cb6d-wkdlx"
	Sep 20 17:33:09 running-upgrade-444000 kubelet[11775]: I0920 17:33:09.430501   11775 scope.go:110] "RemoveContainer" containerID="f9b4c92961ad44dbede4bc33fb21bb4967b525fc7975abd297f45c450af83cd1"
	Sep 20 17:33:09 running-upgrade-444000 kubelet[11775]: I0920 17:33:09.444734   11775 scope.go:110] "RemoveContainer" containerID="a9ee06323540b8c3c1a1a37a25bd130687ee05118f79d8edb6aada61a8db1175"
	
	
	==> storage-provisioner [8a4f38f0255e] <==
	I0920 17:29:21.232251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:29:21.239172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:29:21.239224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:29:21.243585       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:29:21.243668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-444000_5465405c-b91a-4298-8e13-e836c86a79be!
	I0920 17:29:21.244067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08a594aa-7030-4d98-a32d-8e776e77eae2", APIVersion:"v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-444000_5465405c-b91a-4298-8e13-e836c86a79be became leader
	I0920 17:29:21.344353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-444000_5465405c-b91a-4298-8e13-e836c86a79be!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-444000 -n running-upgrade-444000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-444000 -n running-upgrade-444000: exit status 2 (15.743723167s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-444000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-444000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-444000
--- FAIL: TestRunningBinaryUpgrade (600.60s)

TestKubernetesUpgrade (18.29s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.852127083s)

-- stdout --
	* [kubernetes-upgrade-142000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-142000" primary control-plane node in "kubernetes-upgrade-142000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-142000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:26:43.838022    4321 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:26:43.838162    4321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:26:43.838167    4321 out.go:358] Setting ErrFile to fd 2...
	I0920 10:26:43.838170    4321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:26:43.838298    4321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:26:43.839341    4321 out.go:352] Setting JSON to false
	I0920 10:26:43.855737    4321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3366,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:26:43.855819    4321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:26:43.862238    4321 out.go:177] * [kubernetes-upgrade-142000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:26:43.870191    4321 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:26:43.870254    4321 notify.go:220] Checking for updates...
	I0920 10:26:43.876108    4321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:26:43.879216    4321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:26:43.882172    4321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:26:43.885177    4321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:26:43.888168    4321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:26:43.891535    4321 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:26:43.891603    4321 config.go:182] Loaded profile config "running-upgrade-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:26:43.891650    4321 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:26:43.896108    4321 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:26:43.903232    4321 start.go:297] selected driver: qemu2
	I0920 10:26:43.903241    4321 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:26:43.903248    4321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:26:43.905655    4321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:26:43.909074    4321 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:26:43.912188    4321 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:26:43.912201    4321 cni.go:84] Creating CNI manager for ""
	I0920 10:26:43.912234    4321 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:26:43.912269    4321 start.go:340] cluster config:
	{Name:kubernetes-upgrade-142000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:26:43.915772    4321 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:26:43.923076    4321 out.go:177] * Starting "kubernetes-upgrade-142000" primary control-plane node in "kubernetes-upgrade-142000" cluster
	I0920 10:26:43.927135    4321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:26:43.927151    4321 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:26:43.927159    4321 cache.go:56] Caching tarball of preloaded images
	I0920 10:26:43.927222    4321 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:26:43.927227    4321 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:26:43.927289    4321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kubernetes-upgrade-142000/config.json ...
	I0920 10:26:43.927301    4321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kubernetes-upgrade-142000/config.json: {Name:mkdbe06d983a3c0f081bc72ac39d8dff7f60534e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:26:43.927672    4321 start.go:360] acquireMachinesLock for kubernetes-upgrade-142000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:26:43.927717    4321 start.go:364] duration metric: took 34.709µs to acquireMachinesLock for "kubernetes-upgrade-142000"
	I0920 10:26:43.927749    4321 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:26:43.927773    4321 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:26:43.935131    4321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:26:43.951342    4321 start.go:159] libmachine.API.Create for "kubernetes-upgrade-142000" (driver="qemu2")
	I0920 10:26:43.951371    4321 client.go:168] LocalClient.Create starting
	I0920 10:26:43.951449    4321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:26:43.951481    4321 main.go:141] libmachine: Decoding PEM data...
	I0920 10:26:43.951489    4321 main.go:141] libmachine: Parsing certificate...
	I0920 10:26:43.951528    4321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:26:43.951554    4321 main.go:141] libmachine: Decoding PEM data...
	I0920 10:26:43.951564    4321 main.go:141] libmachine: Parsing certificate...
	I0920 10:26:43.952042    4321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:26:44.120113    4321 main.go:141] libmachine: Creating SSH key...
	I0920 10:26:44.241447    4321 main.go:141] libmachine: Creating Disk image...
	I0920 10:26:44.241454    4321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:26:44.241645    4321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:44.250777    4321 main.go:141] libmachine: STDOUT: 
	I0920 10:26:44.250799    4321 main.go:141] libmachine: STDERR: 
	I0920 10:26:44.250853    4321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2 +20000M
	I0920 10:26:44.258676    4321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:26:44.258695    4321 main.go:141] libmachine: STDERR: 
	I0920 10:26:44.258715    4321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:44.258719    4321 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:26:44.258732    4321 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:26:44.258758    4321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:d6:e0:d8:73:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:44.260322    4321 main.go:141] libmachine: STDOUT: 
	I0920 10:26:44.260337    4321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:26:44.260356    4321 client.go:171] duration metric: took 308.987542ms to LocalClient.Create
	I0920 10:26:46.262423    4321 start.go:128] duration metric: took 2.334706125s to createHost
	I0920 10:26:46.262445    4321 start.go:83] releasing machines lock for "kubernetes-upgrade-142000", held for 2.334787667s
	W0920 10:26:46.262481    4321 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:26:46.271005    4321 out.go:177] * Deleting "kubernetes-upgrade-142000" in qemu2 ...
	W0920 10:26:46.288541    4321 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:26:46.288547    4321 start.go:729] Will try again in 5 seconds ...
	I0920 10:26:51.289438    4321 start.go:360] acquireMachinesLock for kubernetes-upgrade-142000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:26:51.290042    4321 start.go:364] duration metric: took 495.041µs to acquireMachinesLock for "kubernetes-upgrade-142000"
	I0920 10:26:51.290189    4321 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:26:51.290391    4321 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:26:51.309012    4321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:26:51.353587    4321 start.go:159] libmachine.API.Create for "kubernetes-upgrade-142000" (driver="qemu2")
	I0920 10:26:51.353639    4321 client.go:168] LocalClient.Create starting
	I0920 10:26:51.353754    4321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:26:51.353825    4321 main.go:141] libmachine: Decoding PEM data...
	I0920 10:26:51.353840    4321 main.go:141] libmachine: Parsing certificate...
	I0920 10:26:51.353901    4321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:26:51.353947    4321 main.go:141] libmachine: Decoding PEM data...
	I0920 10:26:51.353959    4321 main.go:141] libmachine: Parsing certificate...
	I0920 10:26:51.354431    4321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:26:51.529740    4321 main.go:141] libmachine: Creating SSH key...
	I0920 10:26:51.591490    4321 main.go:141] libmachine: Creating Disk image...
	I0920 10:26:51.591498    4321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:26:51.591721    4321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:51.600977    4321 main.go:141] libmachine: STDOUT: 
	I0920 10:26:51.600999    4321 main.go:141] libmachine: STDERR: 
	I0920 10:26:51.601068    4321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2 +20000M
	I0920 10:26:51.609078    4321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:26:51.609096    4321 main.go:141] libmachine: STDERR: 
	I0920 10:26:51.609113    4321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:51.609119    4321 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:26:51.609129    4321 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:26:51.609158    4321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b7:65:de:0d:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:51.610944    4321 main.go:141] libmachine: STDOUT: 
	I0920 10:26:51.610961    4321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:26:51.610975    4321 client.go:171] duration metric: took 257.33825ms to LocalClient.Create
	I0920 10:26:53.613134    4321 start.go:128] duration metric: took 2.322765542s to createHost
	I0920 10:26:53.613248    4321 start.go:83] releasing machines lock for "kubernetes-upgrade-142000", held for 2.323244958s
	W0920 10:26:53.613617    4321 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:26:53.628281    4321 out.go:201] 
	W0920 10:26:53.631490    4321 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:26:53.631524    4321 out.go:270] * 
	* 
	W0920 10:26:53.633641    4321 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:26:53.647267    4321 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-142000
E0920 10:26:55.633300    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-142000: (3.086431417s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-142000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-142000 status --format={{.Host}}: exit status 7 (39.586375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
E0920 10:26:59.305696    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175589125s)

-- stdout --
	* [kubernetes-upgrade-142000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-142000" primary control-plane node in "kubernetes-upgrade-142000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:26:56.817741    4360 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:26:56.817942    4360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:26:56.817946    4360 out.go:358] Setting ErrFile to fd 2...
	I0920 10:26:56.817948    4360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:26:56.818072    4360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:26:56.819114    4360 out.go:352] Setting JSON to false
	I0920 10:26:56.835701    4360 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3379,"bootTime":1726849837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:26:56.835771    4360 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:26:56.840412    4360 out.go:177] * [kubernetes-upgrade-142000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:26:56.847374    4360 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:26:56.847445    4360 notify.go:220] Checking for updates...
	I0920 10:26:56.855294    4360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:26:56.858392    4360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:26:56.862242    4360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:26:56.865428    4360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:26:56.868352    4360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:26:56.871696    4360 config.go:182] Loaded profile config "kubernetes-upgrade-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:26:56.871973    4360 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:26:56.876318    4360 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:26:56.883376    4360 start.go:297] selected driver: qemu2
	I0920 10:26:56.883384    4360 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:26:56.883444    4360 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:26:56.885740    4360 cni.go:84] Creating CNI manager for ""
	I0920 10:26:56.885781    4360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:26:56.885809    4360 start.go:340] cluster config:
	{Name:kubernetes-upgrade-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-142000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:26:56.889195    4360 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:26:56.897388    4360 out.go:177] * Starting "kubernetes-upgrade-142000" primary control-plane node in "kubernetes-upgrade-142000" cluster
	I0920 10:26:56.901366    4360 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:26:56.901390    4360 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:26:56.901398    4360 cache.go:56] Caching tarball of preloaded images
	I0920 10:26:56.901460    4360 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:26:56.901466    4360 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:26:56.901536    4360 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kubernetes-upgrade-142000/config.json ...
	I0920 10:26:56.901983    4360 start.go:360] acquireMachinesLock for kubernetes-upgrade-142000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:26:56.902012    4360 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "kubernetes-upgrade-142000"
	I0920 10:26:56.902020    4360 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:26:56.902024    4360 fix.go:54] fixHost starting: 
	I0920 10:26:56.902147    4360 fix.go:112] recreateIfNeeded on kubernetes-upgrade-142000: state=Stopped err=<nil>
	W0920 10:26:56.902155    4360 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:26:56.906421    4360 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-142000" ...
	I0920 10:26:56.914346    4360 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:26:56.914378    4360 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b7:65:de:0d:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:26:56.916240    4360 main.go:141] libmachine: STDOUT: 
	I0920 10:26:56.916260    4360 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:26:56.916283    4360 fix.go:56] duration metric: took 14.258084ms for fixHost
	I0920 10:26:56.916288    4360 start.go:83] releasing machines lock for "kubernetes-upgrade-142000", held for 14.272667ms
	W0920 10:26:56.916294    4360 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:26:56.916324    4360 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:26:56.916329    4360 start.go:729] Will try again in 5 seconds ...
	I0920 10:27:01.918254    4360 start.go:360] acquireMachinesLock for kubernetes-upgrade-142000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:27:01.918365    4360 start.go:364] duration metric: took 88.541µs to acquireMachinesLock for "kubernetes-upgrade-142000"
	I0920 10:27:01.918385    4360 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:27:01.918388    4360 fix.go:54] fixHost starting: 
	I0920 10:27:01.918535    4360 fix.go:112] recreateIfNeeded on kubernetes-upgrade-142000: state=Stopped err=<nil>
	W0920 10:27:01.918541    4360 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:27:01.922751    4360 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-142000" ...
	I0920 10:27:01.929630    4360 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:27:01.929662    4360 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b7:65:de:0d:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubernetes-upgrade-142000/disk.qcow2
	I0920 10:27:01.931865    4360 main.go:141] libmachine: STDOUT: 
	I0920 10:27:01.931887    4360 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:27:01.931906    4360 fix.go:56] duration metric: took 13.517875ms for fixHost
	I0920 10:27:01.931910    4360 start.go:83] releasing machines lock for "kubernetes-upgrade-142000", held for 13.538083ms
	W0920 10:27:01.931953    4360 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:27:01.939614    4360 out.go:201] 
	W0920 10:27:01.943752    4360 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:27:01.943760    4360 out.go:270] * 
	* 
	W0920 10:27:01.944254    4360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:27:01.955631    4360 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-142000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-142000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-142000 version --output=json: exit status 1 (27.937042ms)

** stderr ** 
	error: context "kubernetes-upgrade-142000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-20 10:27:01.993308 -0700 PDT m=+2624.296530709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-142000 -n kubernetes-upgrade-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-142000 -n kubernetes-upgrade-142000: exit status 7 (30.871459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-142000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-142000
--- FAIL: TestKubernetesUpgrade (18.29s)
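Each qemu2 start attempt in this test stops at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. As a rough pre-flight diagnostic for the CI host (a sketch only, not part of the test suite; the only detail carried over from the commands above is the /var/run/socket_vmnet path), one could probe the daemon's socket before kicking off the run:

	// Hypothetical pre-flight probe, not part of the minikube integration tests.
	// It fails fast if nothing is accepting connections on the socket_vmnet
	// socket, which is exactly the condition behind the errors above.
	package main

	import (
		"log"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			log.Fatalf("socket_vmnet is not accepting connections: %v", err)
		}
		conn.Close()
		log.Println("socket_vmnet is reachable")
	}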

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19672
- KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3075169254/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.89s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19672
- KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2493903290/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.89s)
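Both hyperkit subtests fail before any upgrade logic runs: minikube exits with DRV_UNSUPPORTED_OS (status 56) because, as the output states, the hyperkit driver is not supported on darwin/arm64, which is what this agent is. A minimal skip guard along these lines (a sketch only; the package and helper name are assumptions, not the suite's actual code) would record these as skips rather than failures on Apple Silicon agents:

	package integration

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit is a hypothetical helper: per the DRV_UNSUPPORTED_OS error
	// above, the hyperkit driver cannot run on darwin/arm64, so callers on such
	// an agent are skipped instead of failing.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}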

TestStoppedBinaryUpgrade/Upgrade (573.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2406415083 start -p stopped-upgrade-593000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2406415083 start -p stopped-upgrade-593000 --memory=2200 --vm-driver=qemu2 : (39.864625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2406415083 -p stopped-upgrade-593000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2406415083 -p stopped-upgrade-593000 stop: (12.127228166s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-593000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0920 10:31:55.624325    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:31:59.296965    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-593000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.805816708s)

-- stdout --
	* [stopped-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-593000" primary control-plane node in "stopped-upgrade-593000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-593000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0920 10:27:55.469599    4398 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:27:55.469766    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:27:55.469771    4398 out.go:358] Setting ErrFile to fd 2...
	I0920 10:27:55.469774    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:27:55.469950    4398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:27:55.471110    4398 out.go:352] Setting JSON to false
	I0920 10:27:55.491331    4398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3438,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:27:55.491403    4398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:27:55.497059    4398 out.go:177] * [stopped-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:27:55.504910    4398 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:27:55.504960    4398 notify.go:220] Checking for updates...
	I0920 10:27:55.513038    4398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:27:55.516067    4398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:27:55.519025    4398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:27:55.522064    4398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:27:55.524988    4398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:27:55.528344    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:27:55.532031    4398 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:27:55.533446    4398 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:27:55.538012    4398 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:27:55.544889    4398 start.go:297] selected driver: qemu2
	I0920 10:27:55.544895    4398 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:27:55.544942    4398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:27:55.547660    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:27:55.547691    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:27:55.547716    4398 start.go:340] cluster config:
	{Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:27:55.547763    4398 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:27:55.556022    4398 out.go:177] * Starting "stopped-upgrade-593000" primary control-plane node in "stopped-upgrade-593000" cluster
	I0920 10:27:55.560016    4398 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:27:55.560046    4398 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:27:55.560055    4398 cache.go:56] Caching tarball of preloaded images
	I0920 10:27:55.560148    4398 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:27:55.560154    4398 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:27:55.560219    4398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/config.json ...
	I0920 10:27:55.560708    4398 start.go:360] acquireMachinesLock for stopped-upgrade-593000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:27:55.560744    4398 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "stopped-upgrade-593000"
	I0920 10:27:55.560754    4398 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:27:55.560758    4398 fix.go:54] fixHost starting: 
	I0920 10:27:55.560873    4398 fix.go:112] recreateIfNeeded on stopped-upgrade-593000: state=Stopped err=<nil>
	W0920 10:27:55.560882    4398 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:27:55.564031    4398 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-593000" ...
	I0920 10:27:55.572016    4398 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:27:55.572090    4398 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50485-:22,hostfwd=tcp::50486-:2376,hostname=stopped-upgrade-593000 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/disk.qcow2
	I0920 10:27:55.620686    4398 main.go:141] libmachine: STDOUT: 
	I0920 10:27:55.620718    4398 main.go:141] libmachine: STDERR: 
	I0920 10:27:55.620731    4398 main.go:141] libmachine: Waiting for VM to start (ssh -p 50485 docker@127.0.0.1)...
	I0920 10:28:15.387373    4398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/config.json ...
	I0920 10:28:15.388100    4398 machine.go:93] provisionDockerMachine start ...
	I0920 10:28:15.388250    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.388586    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.388601    4398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:28:15.480009    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:28:15.480042    4398 buildroot.go:166] provisioning hostname "stopped-upgrade-593000"
	I0920 10:28:15.480151    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.480372    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.480383    4398 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-593000 && echo "stopped-upgrade-593000" | sudo tee /etc/hostname
	I0920 10:28:15.568909    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-593000
	
	I0920 10:28:15.568999    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.569182    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.569198    4398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-593000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-593000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-593000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:28:15.651198    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:28:15.651213    4398 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-1143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-1143/.minikube}
	I0920 10:28:15.651230    4398 buildroot.go:174] setting up certificates
	I0920 10:28:15.651236    4398 provision.go:84] configureAuth start
	I0920 10:28:15.651241    4398 provision.go:143] copyHostCerts
	I0920 10:28:15.651339    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem, removing ...
	I0920 10:28:15.651347    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem
	I0920 10:28:15.651873    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.pem (1078 bytes)
	I0920 10:28:15.652100    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem, removing ...
	I0920 10:28:15.652104    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem
	I0920 10:28:15.652165    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/cert.pem (1123 bytes)
	I0920 10:28:15.652306    4398 exec_runner.go:144] found /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem, removing ...
	I0920 10:28:15.652310    4398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem
	I0920 10:28:15.652362    4398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-1143/.minikube/key.pem (1679 bytes)
	I0920 10:28:15.652464    4398 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-593000 san=[127.0.0.1 localhost minikube stopped-upgrade-593000]
	I0920 10:28:15.768179    4398 provision.go:177] copyRemoteCerts
	I0920 10:28:15.768223    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:28:15.768233    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:15.807043    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:28:15.814426    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:28:15.821675    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 10:28:15.828554    4398 provision.go:87] duration metric: took 177.317291ms to configureAuth
	I0920 10:28:15.828563    4398 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:28:15.828681    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:28:15.828725    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.828817    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.828822    4398 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:28:15.899822    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:28:15.899832    4398 buildroot.go:70] root file system type: tmpfs
	I0920 10:28:15.899884    4398 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:28:15.899934    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.900069    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.900102    4398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:28:15.977063    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:28:15.977135    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:15.977245    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:15.977255    4398 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:28:16.322021    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:28:16.322034    4398 machine.go:96] duration metric: took 933.949292ms to provisionDockerMachine
	I0920 10:28:16.322040    4398 start.go:293] postStartSetup for "stopped-upgrade-593000" (driver="qemu2")
	I0920 10:28:16.322047    4398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:28:16.322103    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:28:16.322114    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:16.361166    4398 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:28:16.362431    4398 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:28:16.362439    4398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/addons for local assets ...
	I0920 10:28:16.362522    4398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-1143/.minikube/files for local assets ...
	I0920 10:28:16.362865    4398 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0920 10:28:16.363000    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:28:16.365852    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:28:16.372420    4398 start.go:296] duration metric: took 50.375833ms for postStartSetup
	I0920 10:28:16.372434    4398 fix.go:56] duration metric: took 20.812254167s for fixHost
	I0920 10:28:16.372473    4398 main.go:141] libmachine: Using SSH client type: native
	I0920 10:28:16.372577    4398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c39c00] 0x100c3c440 <nil>  [] 0s} localhost 50485 <nil> <nil>}
	I0920 10:28:16.372582    4398 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:28:16.446827    4398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853296.067055879
	
	I0920 10:28:16.446836    4398 fix.go:216] guest clock: 1726853296.067055879
	I0920 10:28:16.446841    4398 fix.go:229] Guest: 2024-09-20 10:28:16.067055879 -0700 PDT Remote: 2024-09-20 10:28:16.372436 -0700 PDT m=+20.934048501 (delta=-305.380121ms)
	I0920 10:28:16.446852    4398 fix.go:200] guest clock delta is within tolerance: -305.380121ms
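
The clock check above compares the guest's `date +%s.%N` output against the host clock and only intervenes when the delta exceeds a tolerance. A minimal sketch of that comparison, assuming a 2s tolerance for illustration (not minikube's actual fix.go constant):

// clockdelta.go: parse the guest's `date +%s.%N` output, compute the delta
// against the host clock, and flag it only if it exceeds the tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1726853296.067055879" // guest output of `date +%s.%N`
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now()

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust guest clock\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
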
	I0920 10:28:16.446855    4398 start.go:83] releasing machines lock for "stopped-upgrade-593000", held for 20.88668675s
	I0920 10:28:16.446933    4398 ssh_runner.go:195] Run: cat /version.json
	I0920 10:28:16.446946    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:28:16.446933    4398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:28:16.446995    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	W0920 10:28:16.447513    4398 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50485: connect: connection refused
	I0920 10:28:16.447534    4398 retry.go:31] will retry after 350.539855ms: dial tcp [::1]:50485: connect: connection refused
	W0920 10:28:16.483662    4398 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:28:16.483712    4398 ssh_runner.go:195] Run: systemctl --version
	I0920 10:28:16.485799    4398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:28:16.487364    4398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:28:16.487395    4398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:28:16.490414    4398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:28:16.494872    4398 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:28:16.494880    4398 start.go:495] detecting cgroup driver to use...
	I0920 10:28:16.494961    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:28:16.502392    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:28:16.505536    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:28:16.508434    4398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:28:16.508461    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:28:16.511571    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:28:16.514983    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:28:16.518518    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:28:16.521457    4398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:28:16.524207    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:28:16.527491    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:28:16.536119    4398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:28:16.540917    4398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:28:16.543970    4398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:28:16.546924    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:16.623484    4398 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:28:16.634072    4398 start.go:495] detecting cgroup driver to use...
	I0920 10:28:16.634149    4398 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:28:16.640947    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:28:16.645838    4398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:28:16.654308    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:28:16.659097    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:28:16.663738    4398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:28:16.703527    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:28:16.708546    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:28:16.713985    4398 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:28:16.715217    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:28:16.717804    4398 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:28:16.722613    4398 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:28:16.786761    4398 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:28:16.866407    4398 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:28:16.866476    4398 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:28:16.871698    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:16.932909    4398 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:28:18.083256    4398 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15035225s)
	I0920 10:28:18.083336    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:28:18.087800    4398 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:28:18.093754    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:28:18.098470    4398 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:28:18.162392    4398 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:28:18.226550    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:18.290557    4398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:28:18.296692    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:28:18.301440    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:18.367070    4398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:28:18.407286    4398 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:28:18.407385    4398 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:28:18.410459    4398 start.go:563] Will wait 60s for crictl version
	I0920 10:28:18.410525    4398 ssh_runner.go:195] Run: which crictl
	I0920 10:28:18.411939    4398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:28:18.426258    4398 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:28:18.426334    4398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:28:18.444505    4398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:28:18.461638    4398 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:28:18.461778    4398 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:28:18.463096    4398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:28:18.467204    4398 kubeadm.go:883] updating cluster {Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:28:18.467248    4398 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:28:18.467293    4398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:28:18.478842    4398 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:28:18.478851    4398 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:28:18.478912    4398 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:28:18.482263    4398 ssh_runner.go:195] Run: which lz4
	I0920 10:28:18.483552    4398 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:28:18.484888    4398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:28:18.484898    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:28:19.374919    4398 docker.go:649] duration metric: took 891.433791ms to copy over tarball
	I0920 10:28:19.374984    4398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:28:20.543103    4398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168134333s)
	I0920 10:28:20.543117    4398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:28:20.560063    4398 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:28:20.563630    4398 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:28:20.569342    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:20.633082    4398 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:28:22.070107    4398 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.437046958s)
	I0920 10:28:22.070220    4398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:28:22.080974    4398 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:28:22.080984    4398 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:28:22.080990    4398 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:28:22.086691    4398 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:22.088598    4398 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.089968    4398 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.089981    4398 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:22.091402    4398 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.091459    4398 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.092901    4398 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.092917    4398 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:28:22.094167    4398 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.094177    4398 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.095575    4398 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.095591    4398 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:28:22.096646    4398 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.096694    4398 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.097629    4398 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.098322    4398 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.509241    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.526844    4398 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:28:22.526871    4398 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.526934    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:28:22.537160    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:28:22.544348    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.544448    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:28:22.549091    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.562510    4398 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:28:22.562538    4398 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.562612    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:28:22.565047    4398 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:28:22.565065    4398 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:28:22.565110    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:28:22.567113    4398 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:28:22.567126    4398 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:28:22.567173    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0920 10:28:22.578070    4398 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:28:22.578231    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.584949    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:28:22.585002    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:28:22.585135    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:28:22.586165    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.589313    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:28:22.589418    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:28:22.597820    4398 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:28:22.597836    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:28:22.597842    4398 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.597864    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:28:22.597906    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:28:22.602387    4398 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:28:22.602397    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:28:22.602415    4398 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.602428    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:28:22.602482    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:28:22.616925    4398 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:28:22.616939    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:28:22.617116    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.619844    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:28:22.619975    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:28:22.630226    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:28:22.666013    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:28:22.666062    4398 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:28:22.666066    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:28:22.666083    4398 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.666090    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:28:22.666140    4398 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:28:22.695270    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:28:22.780404    4398 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:28:22.780456    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:28:22.909249    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:28:22.936111    4398 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:28:22.936128    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:28:23.073192    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0920 10:28:23.079184    4398 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:28:23.079302    4398 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.089645    4398 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:28:23.089670    4398 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.089739    4398 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:28:23.102932    4398 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:28:23.103070    4398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:28:23.104510    4398 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:28:23.104522    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:28:23.132695    4398 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:28:23.132708    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:28:23.360569    4398 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:28:23.360607    4398 cache_images.go:92] duration metric: took 1.279645458s to LoadCachedImages
	W0920 10:28:23.360644    4398 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0920 10:28:23.360654    4398 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:28:23.360696    4398 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-593000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:28:23.360774    4398 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:28:23.374428    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:28:23.374441    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:28:23.374447    4398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:28:23.374456    4398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-593000 NodeName:stopped-upgrade-593000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:28:23.374513    4398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-593000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:28:23.374582    4398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:28:23.378173    4398 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:28:23.378216    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:28:23.380860    4398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:28:23.385631    4398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:28:23.390464    4398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:28:23.395919    4398 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:28:23.397203    4398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:28:23.400387    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:28:23.467194    4398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:28:23.472954    4398 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000 for IP: 10.0.2.15
	I0920 10:28:23.472969    4398 certs.go:194] generating shared ca certs ...
	I0920 10:28:23.472978    4398 certs.go:226] acquiring lock for ca certs: {Name:mk7151e0388cf18b174fabc4929e6178a41b4c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.473141    4398 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key
	I0920 10:28:23.473190    4398 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key
	I0920 10:28:23.473196    4398 certs.go:256] generating profile certs ...
	I0920 10:28:23.473254    4398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key
	I0920 10:28:23.473273    4398 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731
	I0920 10:28:23.473284    4398 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:28:23.523351    4398 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 ...
	I0920 10:28:23.523367    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731: {Name:mk33e1c515dcd1dcd2322b493212597c9529e282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.524004    4398 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731 ...
	I0920 10:28:23.524014    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731: {Name:mkaa29a25453276623c6265144807dca9cb38e64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.524164    4398 certs.go:381] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt.84cad731 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt
	I0920 10:28:23.524306    4398 certs.go:385] copying /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key.84cad731 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key
	I0920 10:28:23.524470    4398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.key
	I0920 10:28:23.524610    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem (1338 bytes)
	W0920 10:28:23.524642    4398 certs.go:480] ignoring /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0920 10:28:23.524646    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 10:28:23.524668    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:28:23.524692    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:28:23.524712    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/key.pem (1679 bytes)
	I0920 10:28:23.524749    4398 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0920 10:28:23.525068    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:28:23.531795    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:28:23.539324    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:28:23.546829    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:28:23.553988    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:28:23.560909    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 10:28:23.567687    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:28:23.575185    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:28:23.582702    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:28:23.589832    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0920 10:28:23.596634    4398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0920 10:28:23.603298    4398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:28:23.609449    4398 ssh_runner.go:195] Run: openssl version
	I0920 10:28:23.611268    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0920 10:28:23.614818    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.616170    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:59 /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.616194    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0920 10:28:23.618044    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:28:23.620942    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:28:23.623891    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.625427    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.625448    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:28:23.627108    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:28:23.630557    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0920 10:28:23.633522    4398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.634808    4398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:59 /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.634827    4398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0920 10:28:23.636607    4398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0920 10:28:23.639705    4398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:28:23.641173    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:28:23.642954    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:28:23.644810    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:28:23.646712    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:28:23.648648    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:28:23.650408    4398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 10:28:23.652242    4398 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50520 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:28:23.652308    4398 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:28:23.663074    4398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:28:23.666885    4398 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:28:23.666896    4398 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:28:23.666924    4398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:28:23.670060    4398 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:28:23.670359    4398 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-593000" does not appear in /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:28:23.670453    4398 kubeconfig.go:62] /Users/jenkins/minikube-integration/19672-1143/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-593000" cluster setting kubeconfig missing "stopped-upgrade-593000" context setting]
	I0920 10:28:23.670648    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:28:23.671361    4398 kapi.go:59] client config for stopped-upgrade-593000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102212030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:28:23.671692    4398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:28:23.675007    4398 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-593000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
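
The drift check above relies on the exit code of `diff -u`: 0 means the rendered kubeadm.yaml already matches what is on disk, non-zero means it changed and the cluster is reconfigured from the new file. A minimal sketch of that decision (illustrative only, not minikube's kubeadm.go):

// driftcheck.go: treat a non-zero `diff -u` exit as kubeadm config drift.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("kubeadm config unchanged, skipping reconfigure")
		return
	}
	// diff exits 1 (an error from CombinedOutput) when the files differ.
	fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", out)
}
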
	I0920 10:28:23.675012    4398 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:28:23.675060    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:28:23.685586    4398 docker.go:483] Stopping containers: [9b6d0dc7f9bd 39a78cefa13d 53b2e9135faf 03d7ed98fdba a0db8e235df0 9887b6c9112a 89f47a36713c 425892479a5b]
	I0920 10:28:23.685661    4398 ssh_runner.go:195] Run: docker stop 9b6d0dc7f9bd 39a78cefa13d 53b2e9135faf 03d7ed98fdba a0db8e235df0 9887b6c9112a 89f47a36713c 425892479a5b
	I0920 10:28:23.695906    4398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:28:23.701461    4398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:28:23.704171    4398 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:28:23.704180    4398 kubeadm.go:157] found existing configuration files:
	
	I0920 10:28:23.704212    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf
	I0920 10:28:23.706978    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:28:23.707012    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:28:23.710134    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf
	I0920 10:28:23.712528    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:28:23.712557    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:28:23.715275    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf
	I0920 10:28:23.718280    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:28:23.718303    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:28:23.721020    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf
	I0920 10:28:23.723577    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:28:23.723602    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:28:23.726564    4398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:28:23.729326    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:23.754367    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.152587    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.269776    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.291878    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:28:24.317235    4398 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:28:24.317325    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:24.819348    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:25.318958    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:28:25.322867    4398 api_server.go:72] duration metric: took 1.005661917s to wait for apiserver process to appear ...
	I0920 10:28:25.322877    4398 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:28:25.322885    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:30.324866    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:30.324934    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:35.325562    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:35.325668    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:40.326570    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:40.326676    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:45.328039    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:45.328085    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:50.329399    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:50.329501    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:28:55.331459    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:28:55.331503    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:00.333604    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:00.333628    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:05.335715    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:05.335773    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:10.338052    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:10.338093    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:15.338748    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:15.338799    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:20.340986    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:20.341007    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:25.343065    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
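At this point every healthz probe has timed out: each "Checking apiserver healthz" line is followed roughly five seconds later by "stopped ... Client.Timeout exceeded", which matches a GET with a 5-second client timeout retried until an overall budget runs out. A minimal sketch of that probe pattern (not minikube's actual code; the 4-minute overall budget is an assumption, and certificate verification is skipped only because the endpoint presents a self-signed certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
    		// Skip verification for this illustration only: the apiserver cert is self-signed.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    			fmt.Println("stopped:", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("gave up waiting for apiserver healthz")
    }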
	I0920 10:29:25.343375    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:25.368738    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:25.368869    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:25.385841    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:25.385947    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:25.403803    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:25.403885    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:25.414654    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:25.414741    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:25.425368    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:25.425450    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:25.435945    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:25.436024    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:25.446196    4398 logs.go:276] 0 containers: []
	W0920 10:29:25.446213    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:25.446294    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:25.457049    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:25.457068    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:25.457074    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:25.461345    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:25.461365    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:25.476016    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:25.476026    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:25.515540    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:25.515551    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:25.534356    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:25.534369    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:25.545684    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:25.545695    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:25.557383    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:25.557398    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:25.575606    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:25.575616    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:25.586933    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:25.586948    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:25.598414    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:25.598426    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:25.673908    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:25.673919    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:25.688039    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:25.688050    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:25.701999    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:25.702011    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:25.714343    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:25.714355    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:25.755743    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:25.755755    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:25.766636    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:25.766647    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:25.782711    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:25.782721    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
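This completes one diagnostic pass: after a failed healthz probe, the container IDs for each control-plane component are listed with `docker ps -a --filter=name=k8s_<component> --format {{.ID}}`, and the last 400 lines of each container's logs are collected, alongside kubelet/Docker journal output, dmesg, and `kubectl describe nodes`. The same pass then repeats before every subsequent probe. A rough sketch of the container-log part of such a pass, mirroring the commands in the log (illustrative only; it assumes a local Docker daemon and minimal error handling):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of containers whose names match the k8s_<component> prefix.
    func containerIDs(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids := containerIDs(c)
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// Equivalent of: docker logs --tail 400 <id>
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s [%s] ---\n%s\n", c, id, logs)
    		}
    	}
    }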
	I0920 10:29:28.310454    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:33.311339    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:33.311466    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:33.322506    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:33.322597    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:33.333260    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:33.333338    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:33.344161    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:33.344245    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:33.354699    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:33.354796    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:33.365649    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:33.365750    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:33.376860    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:33.376946    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:33.386895    4398 logs.go:276] 0 containers: []
	W0920 10:29:33.386907    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:33.386971    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:33.398180    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:33.398201    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:33.398208    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:33.436828    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:33.436843    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:33.441629    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:33.441637    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:33.478010    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:33.478021    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:33.504627    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:33.504642    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:33.516771    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:33.516784    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:33.555406    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:33.555417    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:33.570047    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:33.570057    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:33.585595    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:33.585606    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:33.597599    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:33.597610    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:33.615256    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:33.615271    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:33.629172    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:33.629182    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:33.640694    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:33.640706    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:33.652273    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:33.652284    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:33.665740    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:33.665751    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:33.683229    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:33.683244    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:33.694532    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:33.694543    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:36.206190    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:41.208453    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:41.208638    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:41.221131    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:41.221225    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:41.232343    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:41.232429    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:41.251776    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:41.251885    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:41.262272    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:41.262347    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:41.272564    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:41.272645    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:41.282953    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:41.283035    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:41.295261    4398 logs.go:276] 0 containers: []
	W0920 10:29:41.295271    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:41.295332    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:41.306294    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:41.306311    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:41.306316    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:41.343270    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:41.343279    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:41.357618    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:41.357629    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:41.396162    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:41.396173    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:41.410694    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:41.410710    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:41.430273    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:41.430287    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:41.441565    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:41.441578    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:41.456734    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:41.456745    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:41.468813    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:41.468824    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:41.473333    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:41.473340    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:41.509117    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:41.509129    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:41.520756    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:41.520769    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:41.533169    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:41.533180    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:41.545183    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:41.545195    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:41.558974    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:41.558985    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:41.576798    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:41.576808    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:41.589371    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:41.589382    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:44.116195    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:49.116383    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:49.116537    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:49.127742    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:49.127826    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:49.137907    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:49.137998    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:49.148318    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:49.148407    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:49.159385    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:49.159472    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:49.169810    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:49.169886    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:49.180328    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:49.180412    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:49.190157    4398 logs.go:276] 0 containers: []
	W0920 10:29:49.190169    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:49.190231    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:49.200556    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:49.200576    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:49.200581    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:49.214339    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:49.214354    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:49.234215    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:49.234225    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:49.245214    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:49.245228    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:49.260909    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:49.260921    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:49.274744    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:49.274755    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:49.309194    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:49.309210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:49.323793    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:49.323803    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:49.335003    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:49.335012    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:49.346099    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:49.346110    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:49.358415    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:49.358427    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:49.397506    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:49.397523    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:49.403156    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:49.403172    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:49.415257    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:49.415269    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:49.441084    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:49.441093    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:49.483185    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:49.483200    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:49.494930    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:49.494941    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:52.016924    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:29:57.019126    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:29:57.019497    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:29:57.051488    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:29:57.051629    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:29:57.069040    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:29:57.069147    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:29:57.083101    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:29:57.083196    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:29:57.094708    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:29:57.094787    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:29:57.105386    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:29:57.105468    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:29:57.116576    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:29:57.116655    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:29:57.127112    4398 logs.go:276] 0 containers: []
	W0920 10:29:57.127124    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:29:57.127193    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:29:57.137780    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:29:57.137800    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:29:57.137805    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:29:57.175108    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:29:57.175122    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:29:57.179335    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:29:57.179344    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:29:57.220898    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:29:57.220912    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:29:57.241105    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:29:57.241132    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:29:57.257221    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:29:57.257237    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:29:57.270893    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:29:57.270908    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:29:57.284403    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:29:57.284417    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:29:57.299629    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:29:57.299640    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:29:57.311101    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:29:57.311113    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:29:57.322459    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:29:57.322475    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:29:57.334232    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:29:57.334245    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:29:57.359433    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:29:57.359441    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:29:57.393638    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:29:57.393651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:29:57.405731    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:29:57.405744    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:29:57.422831    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:29:57.422841    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:29:57.437840    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:29:57.437852    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:29:59.952670    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:04.954813    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:04.955094    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:04.979265    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:04.979407    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:04.996375    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:04.996475    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:05.009439    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:05.009523    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:05.022380    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:05.022459    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:05.032416    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:05.032493    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:05.042725    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:05.042801    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:05.059916    4398 logs.go:276] 0 containers: []
	W0920 10:30:05.059928    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:05.059994    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:05.070147    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:05.070163    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:05.070170    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:05.109313    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:05.109324    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:05.144096    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:05.144108    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:05.158338    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:05.158349    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:05.175731    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:05.175742    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:05.187408    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:05.187422    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:05.191862    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:05.191869    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:05.205378    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:05.205388    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:05.243223    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:05.243233    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:05.257655    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:05.257665    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:05.269066    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:05.269079    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:05.280626    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:05.280641    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:05.296217    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:05.296227    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:05.307644    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:05.307656    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:05.319403    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:05.319417    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:05.343266    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:05.343275    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:05.357546    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:05.357558    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:07.871511    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:12.872119    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:12.872440    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:12.900492    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:12.900632    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:12.915263    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:12.915354    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:12.927040    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:12.927131    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:12.938056    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:12.938145    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:12.948392    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:12.948472    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:12.959051    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:12.959124    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:12.969821    4398 logs.go:276] 0 containers: []
	W0920 10:30:12.969832    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:12.969898    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:12.980494    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:12.980514    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:12.980520    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:12.985219    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:12.985228    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:13.000336    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:13.000346    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:13.018783    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:13.018793    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:13.038053    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:13.038064    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:13.076436    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:13.076455    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:13.112283    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:13.112298    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:13.126057    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:13.126068    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:13.141382    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:13.141392    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:13.154436    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:13.154449    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:13.175487    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:13.175499    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:13.201407    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:13.201418    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:13.213349    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:13.213361    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:13.252006    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:13.252019    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:13.270042    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:13.270058    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:13.282207    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:13.282218    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:13.294016    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:13.294027    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:15.807879    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:20.810433    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:20.810631    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:20.827398    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:20.827518    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:20.841191    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:20.841266    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:20.851999    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:20.852087    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:20.863003    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:20.863084    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:20.874651    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:20.874722    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:20.885240    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:20.885322    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:20.898516    4398 logs.go:276] 0 containers: []
	W0920 10:30:20.898530    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:20.898603    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:20.908766    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:20.908788    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:20.908797    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:20.922590    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:20.922599    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:20.936944    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:20.936961    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:20.948227    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:20.948239    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:20.987760    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:20.987773    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:21.002026    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:21.002038    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:21.013947    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:21.013958    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:21.028046    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:21.028058    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:21.040472    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:21.040484    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:21.077719    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:21.077736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:21.116719    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:21.116736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:21.127786    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:21.127801    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:21.143448    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:21.143457    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:21.155191    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:21.155201    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:21.180292    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:21.180301    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:21.184926    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:21.184933    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:21.202627    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:21.202640    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:23.715649    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:28.717830    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:28.718018    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:28.735807    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:28.735910    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:28.747061    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:28.747147    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:28.757181    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:28.757254    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:28.768056    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:28.768140    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:28.778793    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:28.778873    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:28.789292    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:28.789374    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:28.800682    4398 logs.go:276] 0 containers: []
	W0920 10:30:28.800694    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:28.800760    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:28.812111    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:28.812132    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:28.812137    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:28.823683    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:28.823698    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:28.828309    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:28.828317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:28.842274    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:28.842287    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:28.854018    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:28.854030    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:28.865149    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:28.865159    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:28.899825    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:28.899839    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:28.914124    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:28.914139    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:28.951399    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:28.951412    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:28.966727    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:28.966739    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:28.978883    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:28.978897    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:29.017734    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:29.017746    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:29.039273    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:29.039286    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:29.059736    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:29.059750    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:29.073693    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:29.073705    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:29.090876    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:29.090886    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:29.109322    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:29.109335    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:31.635963    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:36.638099    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:36.638398    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:36.662538    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:36.662669    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:36.678614    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:36.678712    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:36.695705    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:36.695783    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:36.706501    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:36.706584    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:36.716486    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:36.716568    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:36.726968    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:36.727039    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:36.736777    4398 logs.go:276] 0 containers: []
	W0920 10:30:36.736789    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:36.736850    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:36.747380    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:36.747404    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:36.747409    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:36.782291    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:36.782304    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:36.797256    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:36.797272    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:36.810818    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:36.810838    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:36.821856    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:36.821866    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:36.846423    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:36.846432    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:36.850322    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:36.850331    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:36.864132    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:36.864143    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:36.876962    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:36.876979    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:36.891182    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:36.891194    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:36.929079    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:36.929091    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:36.942713    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:36.942725    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:36.954150    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:36.954160    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:36.971115    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:36.971127    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:36.982100    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:36.982116    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:37.019520    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:37.019533    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:37.033735    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:37.033750    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:39.553863    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:44.554495    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:44.554638    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:44.567449    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:44.567538    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:44.577640    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:44.577726    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:44.588294    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:44.588372    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:44.598726    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:44.598803    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:44.609262    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:44.609343    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:44.628914    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:44.629000    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:44.640317    4398 logs.go:276] 0 containers: []
	W0920 10:30:44.640330    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:44.640396    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:44.650795    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:44.650815    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:44.650820    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:44.688574    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:44.688585    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:44.727790    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:44.727807    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:44.743522    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:44.743533    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:44.755978    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:44.755993    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:44.773755    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:44.773767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:44.789654    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:44.789665    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:44.803138    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:44.803151    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:44.807147    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:44.807153    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:44.841433    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:44.841443    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:44.855671    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:44.855683    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:44.867083    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:44.867094    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:44.882870    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:44.882881    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:44.905712    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:44.905720    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:44.919275    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:44.919286    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:44.934306    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:44.934317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:44.946279    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:44.946292    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:47.459992    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:30:52.461820    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:30:52.462409    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:30:52.503399    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:30:52.503575    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:30:52.522095    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:30:52.522194    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:30:52.536280    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:30:52.536367    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:30:52.547838    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:30:52.547918    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:30:52.558415    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:30:52.558501    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:30:52.568792    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:30:52.568879    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:30:52.579837    4398 logs.go:276] 0 containers: []
	W0920 10:30:52.579848    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:30:52.579911    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:30:52.590249    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:30:52.590266    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:30:52.590271    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:30:52.607496    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:30:52.607508    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:30:52.619393    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:30:52.619408    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:30:52.631008    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:30:52.631023    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:30:52.667255    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:30:52.667263    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:30:52.702500    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:30:52.702512    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:30:52.719515    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:30:52.719529    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:30:52.733017    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:30:52.733030    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:30:52.747353    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:30:52.747366    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:30:52.761477    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:30:52.761487    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:30:52.765848    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:30:52.765854    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:30:52.789306    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:30:52.789317    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:30:52.831158    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:30:52.831170    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:30:52.847920    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:30:52.847935    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:30:52.859227    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:30:52.859237    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:30:52.883619    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:30:52.883633    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:30:52.894859    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:30:52.894872    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:30:55.408264    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:00.410482    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:00.410830    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:00.440019    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:00.440162    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:00.456989    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:00.457109    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:00.470717    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:00.470811    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:00.483053    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:00.483146    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:00.493414    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:00.493498    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:00.504601    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:00.504684    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:00.515536    4398 logs.go:276] 0 containers: []
	W0920 10:31:00.515549    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:00.515621    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:00.526593    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:00.526616    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:00.526622    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:00.537729    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:00.537742    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:00.552738    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:00.552751    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:00.570481    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:00.570492    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:00.585638    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:00.585654    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:00.597105    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:00.597120    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:00.608493    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:00.608505    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:00.627829    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:00.627845    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:00.651054    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:00.651061    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:00.663149    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:00.663160    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:00.676884    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:00.676894    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:00.717351    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:00.717362    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:00.731396    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:00.731406    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:00.748733    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:00.748745    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:00.787872    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:00.787883    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:00.792353    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:00.792360    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:00.831471    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:00.831482    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:03.346590    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:08.349147    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:08.349481    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:08.375132    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:08.375263    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:08.391117    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:08.391210    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:08.403980    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:08.404072    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:08.414823    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:08.414905    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:08.425221    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:08.425326    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:08.436524    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:08.436607    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:08.446837    4398 logs.go:276] 0 containers: []
	W0920 10:31:08.446847    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:08.446916    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:08.457414    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:08.457433    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:08.457440    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:08.471604    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:08.471615    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:08.488775    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:08.488790    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:08.500488    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:08.500498    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:08.524264    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:08.524273    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:08.540161    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:08.540174    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:08.552318    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:08.552328    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:08.590800    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:08.590810    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:08.612910    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:08.612923    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:08.624474    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:08.624487    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:08.659653    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:08.659667    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:08.663848    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:08.663857    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:08.675869    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:08.675884    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:08.691195    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:08.691205    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:08.703093    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:08.703104    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:08.717128    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:08.717137    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:08.729371    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:08.729381    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:11.270258    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:16.272568    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:16.272837    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:16.290344    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:16.290452    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:16.304192    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:16.304287    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:16.315671    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:16.315754    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:16.325958    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:16.326042    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:16.336246    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:16.336331    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:16.347139    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:16.347223    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:16.366134    4398 logs.go:276] 0 containers: []
	W0920 10:31:16.366147    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:16.366221    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:16.376559    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:16.376581    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:16.376586    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:16.387830    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:16.387846    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:16.403154    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:16.403164    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:16.421750    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:16.421767    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:16.460181    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:16.460189    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:16.474447    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:16.474458    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:16.488368    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:16.488379    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:16.499729    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:16.499739    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:16.517057    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:16.517069    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:16.553233    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:16.553245    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:16.598515    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:16.598525    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:16.612940    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:16.612955    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:16.627038    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:16.627055    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:16.644394    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:16.644405    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:16.648849    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:16.648859    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:16.660269    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:16.660280    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:16.671936    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:16.671946    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:19.197698    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:24.199166    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:24.199382    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:24.213690    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:24.213793    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:24.225908    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:24.225999    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:24.236333    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:24.236408    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:24.246509    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:24.246591    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:24.256995    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:24.257079    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:24.268010    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:24.268084    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:24.279314    4398 logs.go:276] 0 containers: []
	W0920 10:31:24.279329    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:24.279396    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:24.290646    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:24.290663    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:24.290669    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:24.294893    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:24.294904    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:24.338241    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:24.338252    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:24.357196    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:24.357210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:24.369851    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:24.369864    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:24.382216    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:24.382227    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:24.406642    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:24.406651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:24.421580    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:24.421594    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:24.437905    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:24.437918    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:24.450084    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:24.450098    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:24.462703    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:24.462716    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:24.502202    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:24.502214    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:24.542422    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:24.542438    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:24.554793    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:24.554812    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:24.571166    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:24.571179    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:24.588803    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:24.588822    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:24.615860    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:24.615876    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:27.136382    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:32.137122    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:32.137325    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:32.148756    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:32.148837    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:32.159188    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:32.159275    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:32.169210    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:32.169280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:32.179409    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:32.179491    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:32.190233    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:32.190314    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:32.200569    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:32.200651    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:32.211296    4398 logs.go:276] 0 containers: []
	W0920 10:31:32.211308    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:32.211380    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:32.223207    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:32.223226    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:32.223232    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:32.261327    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:32.261341    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:32.278213    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:32.278230    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:32.290230    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:32.290242    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:32.304164    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:32.304175    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:32.319089    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:32.319099    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:32.338228    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:32.338239    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:32.362579    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:32.362589    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:32.374464    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:32.374474    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:32.378571    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:32.378578    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:32.396164    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:32.396177    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:32.437322    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:32.437338    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:32.448682    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:32.448694    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:32.464192    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:32.464207    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:32.479539    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:32.479554    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:32.490820    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:32.490833    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:32.502279    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:32.502290    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:35.042177    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:40.042795    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:40.042964    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:40.053659    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:40.053742    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:40.064580    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:40.064696    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:40.074976    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:40.075060    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:40.085491    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:40.085569    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:40.096289    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:40.096371    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:40.106993    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:40.107078    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:40.121548    4398 logs.go:276] 0 containers: []
	W0920 10:31:40.121559    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:40.121624    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:40.132395    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:40.132415    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:40.132421    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:40.150721    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:40.150734    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:40.168185    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:40.168199    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:40.182121    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:40.182132    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:40.216215    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:40.216227    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:40.230450    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:40.230461    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:40.268796    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:40.268806    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:40.283923    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:40.283934    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:40.295755    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:40.295767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:40.307212    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:40.307224    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:40.318046    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:40.318058    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:40.338475    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:40.338486    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:40.354499    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:40.354511    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:40.378683    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:40.378694    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:40.390379    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:40.390393    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:40.430552    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:40.430561    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:40.434726    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:40.434736    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:42.948920    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:47.951130    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:47.951296    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:47.962305    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:47.962378    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:47.975683    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:47.975770    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:47.986883    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:47.986969    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:47.997574    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:47.997660    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:48.007947    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:48.008032    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:48.019611    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:48.019692    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:48.034553    4398 logs.go:276] 0 containers: []
	W0920 10:31:48.034564    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:48.034632    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:48.044607    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:48.044625    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:48.044631    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:48.058914    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:48.058925    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:48.070359    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:48.070371    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:48.082014    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:48.082027    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:48.100318    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:48.100330    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:48.115115    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:48.115130    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:48.153461    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:48.153470    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:48.157425    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:48.157432    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:48.175466    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:48.175482    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:48.192991    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:48.193004    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:48.215388    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:48.215396    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:48.226974    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:48.226984    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:48.262239    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:48.262251    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:48.277105    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:48.277116    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:48.288751    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:48.288763    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:48.301246    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:48.301261    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:48.340155    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:48.340170    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:50.854534    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:31:55.856659    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:31:55.856978    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:31:55.884372    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:31:55.884520    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:31:55.903383    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:31:55.903477    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:31:55.916475    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:31:55.916563    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:31:55.927519    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:31:55.927610    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:31:55.938309    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:31:55.938391    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:31:55.949194    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:31:55.949280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:31:55.959222    4398 logs.go:276] 0 containers: []
	W0920 10:31:55.959233    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:31:55.959297    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:31:55.972928    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:31:55.972946    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:31:55.972973    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:31:55.987035    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:31:55.987048    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:31:55.998936    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:31:55.998949    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:31:56.017365    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:31:56.017375    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:31:56.032134    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:31:56.032144    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:31:56.077144    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:31:56.077155    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:31:56.091128    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:31:56.091140    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:31:56.103188    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:31:56.103199    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:31:56.127084    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:31:56.127092    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:31:56.131066    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:31:56.131073    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:31:56.149055    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:31:56.149071    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:31:56.163686    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:31:56.163707    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:31:56.181538    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:31:56.181559    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:31:56.194576    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:31:56.194592    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:31:56.206045    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:31:56.206057    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:31:56.217575    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:31:56.217587    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:31:56.255658    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:31:56.255665    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:31:58.797925    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:03.799377    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:03.799963    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:03.840172    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:03.840341    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:03.861837    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:03.861972    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:03.876683    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:03.876774    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:03.889387    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:03.889475    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:03.900133    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:03.900216    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:03.911453    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:03.911537    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:03.921930    4398 logs.go:276] 0 containers: []
	W0920 10:32:03.921947    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:03.922027    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:03.936683    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:03.936703    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:03.936709    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:03.973279    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:03.973292    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:03.988021    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:03.988032    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:04.000368    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:04.000380    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:04.012733    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:04.012748    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:04.028601    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:04.028611    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:04.040750    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:04.040765    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:04.053100    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:04.053112    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:04.077045    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:04.077061    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:04.091104    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:04.091120    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:04.105639    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:04.105648    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:04.118132    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:04.118148    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:04.135620    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:04.135630    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:04.174690    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:04.174701    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:04.178893    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:04.178899    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:04.192418    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:04.192429    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:04.231822    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:04.231834    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:06.744942    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:11.747506    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:11.747767    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:11.765059    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:11.765171    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:11.778326    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:11.778415    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:11.788997    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:11.789083    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:11.801471    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:11.801557    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:11.812410    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:11.812497    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:11.823303    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:11.823383    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:11.837616    4398 logs.go:276] 0 containers: []
	W0920 10:32:11.837629    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:11.837698    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:11.848507    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:11.848527    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:11.848533    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:11.852827    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:11.852837    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:11.865223    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:11.865235    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:11.880999    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:11.881011    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:11.898269    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:11.898283    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:11.921321    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:11.921328    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:11.960174    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:11.960185    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:11.997769    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:11.997779    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:12.013550    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:12.013563    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:12.027184    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:12.027202    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:12.081113    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:12.081126    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:12.097695    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:12.097709    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:12.110485    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:12.110500    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:12.121861    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:12.121874    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:12.145458    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:12.145467    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:12.157279    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:12.157293    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:12.177125    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:12.177140    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:14.690932    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:19.693012    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:19.693280    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:32:19.723861    4398 logs.go:276] 2 containers: [b050c9dc67e2 9b6d0dc7f9bd]
	I0920 10:32:19.723990    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:32:19.749184    4398 logs.go:276] 2 containers: [2ff0443033c0 39a78cefa13d]
	I0920 10:32:19.749262    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:32:19.768982    4398 logs.go:276] 1 containers: [37540f943097]
	I0920 10:32:19.769061    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:32:19.779567    4398 logs.go:276] 2 containers: [3f00b5e21a20 53b2e9135faf]
	I0920 10:32:19.779644    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:32:19.789977    4398 logs.go:276] 1 containers: [7b21eff837fa]
	I0920 10:32:19.790064    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:32:19.802269    4398 logs.go:276] 2 containers: [9a00ea8241ed 03d7ed98fdba]
	I0920 10:32:19.802351    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:32:19.812189    4398 logs.go:276] 0 containers: []
	W0920 10:32:19.812201    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:32:19.812267    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:32:19.822813    4398 logs.go:276] 2 containers: [a03835069ef8 3f7dded0ee96]
	I0920 10:32:19.822831    4398 logs.go:123] Gathering logs for etcd [39a78cefa13d] ...
	I0920 10:32:19.822836    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39a78cefa13d"
	I0920 10:32:19.837067    4398 logs.go:123] Gathering logs for kube-proxy [7b21eff837fa] ...
	I0920 10:32:19.837078    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b21eff837fa"
	I0920 10:32:19.850072    4398 logs.go:123] Gathering logs for storage-provisioner [3f7dded0ee96] ...
	I0920 10:32:19.850084    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f7dded0ee96"
	I0920 10:32:19.861007    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:32:19.861020    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:32:19.873708    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:32:19.873717    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:32:19.877636    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:32:19.877645    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:32:19.912388    4398 logs.go:123] Gathering logs for kube-apiserver [b050c9dc67e2] ...
	I0920 10:32:19.912403    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b050c9dc67e2"
	I0920 10:32:19.927044    4398 logs.go:123] Gathering logs for kube-controller-manager [9a00ea8241ed] ...
	I0920 10:32:19.927055    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a00ea8241ed"
	I0920 10:32:19.944283    4398 logs.go:123] Gathering logs for etcd [2ff0443033c0] ...
	I0920 10:32:19.944296    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff0443033c0"
	I0920 10:32:19.957899    4398 logs.go:123] Gathering logs for coredns [37540f943097] ...
	I0920 10:32:19.957914    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37540f943097"
	I0920 10:32:19.973210    4398 logs.go:123] Gathering logs for kube-scheduler [53b2e9135faf] ...
	I0920 10:32:19.973223    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53b2e9135faf"
	I0920 10:32:19.988516    4398 logs.go:123] Gathering logs for storage-provisioner [a03835069ef8] ...
	I0920 10:32:19.988526    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a03835069ef8"
	I0920 10:32:20.000248    4398 logs.go:123] Gathering logs for kube-controller-manager [03d7ed98fdba] ...
	I0920 10:32:20.000259    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03d7ed98fdba"
	I0920 10:32:20.014404    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:32:20.014415    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:32:20.037378    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:32:20.037394    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:32:20.074563    4398 logs.go:123] Gathering logs for kube-apiserver [9b6d0dc7f9bd] ...
	I0920 10:32:20.074574    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6d0dc7f9bd"
	I0920 10:32:20.117198    4398 logs.go:123] Gathering logs for kube-scheduler [3f00b5e21a20] ...
	I0920 10:32:20.117210    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f00b5e21a20"
	I0920 10:32:22.630955    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:27.633136    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:27.633223    4398 kubeadm.go:597] duration metric: took 4m3.973096041s to restartPrimaryControlPlane
	W0920 10:32:27.633288    4398 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:32:27.633314    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:32:28.614739    4398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:32:28.619961    4398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:32:28.622996    4398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:32:28.625675    4398 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:32:28.625682    4398 kubeadm.go:157] found existing configuration files:
	
	I0920 10:32:28.625707    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf
	I0920 10:32:28.628176    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:32:28.628208    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:32:28.631241    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf
	I0920 10:32:28.634330    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:32:28.634356    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:32:28.636800    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf
	I0920 10:32:28.639446    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:32:28.639465    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:32:28.642629    4398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf
	I0920 10:32:28.645077    4398 kubeadm.go:163] "https://control-plane.minikube.internal:50520" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50520 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:32:28.645103    4398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
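	[editor's note] The kubeadm.go:163 entries above show minikube checking each /etc/kubernetes/*.conf for the expected control-plane endpoint and removing any file that does not reference it before re-running kubeadm init. Below is a hedged Go sketch of that grep-then-remove sequence; the file list and endpoint come from the log, while the helper name and error handling are invented for illustration.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint, mirroring the "sudo grep ... || sudo rm -f"
	// sequence in the log. A missing file is treated the same as a stale one.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				os.Remove(f) // equivalent of: sudo rm -f <file>
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:50520", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
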
	I0920 10:32:28.647687    4398 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:32:28.665316    4398 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:32:28.665370    4398 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:32:28.715500    4398 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:32:28.715551    4398 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:32:28.715604    4398 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:32:28.764379    4398 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:32:28.767516    4398 out.go:235]   - Generating certificates and keys ...
	I0920 10:32:28.767549    4398 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:32:28.767582    4398 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:32:28.767620    4398 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:32:28.767660    4398 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:32:28.767697    4398 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:32:28.767731    4398 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:32:28.767764    4398 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:32:28.767819    4398 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:32:28.767861    4398 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:32:28.767917    4398 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:32:28.767951    4398 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:32:28.767991    4398 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:32:28.957095    4398 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:32:29.062088    4398 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:32:29.244712    4398 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:32:29.347698    4398 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:32:29.379233    4398 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:32:29.379632    4398 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:32:29.379693    4398 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:32:29.453074    4398 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:32:29.461187    4398 out.go:235]   - Booting up control plane ...
	I0920 10:32:29.461242    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:32:29.461280    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:32:29.461327    4398 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:32:29.461371    4398 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:32:29.461462    4398 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:32:34.552928    4398 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001336 seconds
	I0920 10:32:34.553025    4398 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:32:34.558811    4398 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:32:35.066822    4398 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:32:35.066931    4398 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-593000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:32:35.570733    4398 kubeadm.go:310] [bootstrap-token] Using token: v0pk0r.yk1w2751tvqi9mna
	I0920 10:32:35.576334    4398 out.go:235]   - Configuring RBAC rules ...
	I0920 10:32:35.576398    4398 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:32:35.576440    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:32:35.579899    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:32:35.580813    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:32:35.581679    4398 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:32:35.582431    4398 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:32:35.586224    4398 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:32:35.757040    4398 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:32:35.973955    4398 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:32:35.974590    4398 kubeadm.go:310] 
	I0920 10:32:35.974629    4398 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:32:35.974633    4398 kubeadm.go:310] 
	I0920 10:32:35.974670    4398 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:32:35.974696    4398 kubeadm.go:310] 
	I0920 10:32:35.974709    4398 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:32:35.974742    4398 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:32:35.974772    4398 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:32:35.974775    4398 kubeadm.go:310] 
	I0920 10:32:35.974804    4398 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:32:35.974833    4398 kubeadm.go:310] 
	I0920 10:32:35.974896    4398 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:32:35.974901    4398 kubeadm.go:310] 
	I0920 10:32:35.974926    4398 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:32:35.975036    4398 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:32:35.975205    4398 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:32:35.975225    4398 kubeadm.go:310] 
	I0920 10:32:35.975302    4398 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:32:35.975510    4398 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:32:35.975534    4398 kubeadm.go:310] 
	I0920 10:32:35.975722    4398 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v0pk0r.yk1w2751tvqi9mna \
	I0920 10:32:35.975868    4398 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a \
	I0920 10:32:35.975895    4398 kubeadm.go:310] 	--control-plane 
	I0920 10:32:35.975901    4398 kubeadm.go:310] 
	I0920 10:32:35.975994    4398 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:32:35.976066    4398 kubeadm.go:310] 
	I0920 10:32:35.976228    4398 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v0pk0r.yk1w2751tvqi9mna \
	I0920 10:32:35.976289    4398 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c54f44fb14845d147478fdac003d6394686246d8bb3fbe9b7d3ee2f2ff166a3a 
	I0920 10:32:35.976353    4398 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:32:35.976359    4398 cni.go:84] Creating CNI manager for ""
	I0920 10:32:35.976368    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:32:35.984565    4398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:32:35.988625    4398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:32:35.991883    4398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
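	[editor's note] The 1-k8s.conflist copied above is a bridge CNI configuration; the log records only its size (496 bytes), not its contents. The snippet below is an illustrative minimal bridge conflist of the kind such a file typically contains, embedded as a Go constant so it could be written out the way the scp step does. The plugin names follow the standard CNI bridge/portmap plugins; the subnet and bridge name are assumptions, not values recovered from the log.

	package main

	import "os"

	// exampleBridgeConflist is an illustrative bridge CNI configuration of the
	// kind minikube places at /etc/cni/net.d/1-k8s.conflist. It is not the exact
	// 496-byte file referenced in the log.
	const exampleBridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		// Written locally for demonstration; the test copies it into the guest VM.
		_ = os.WriteFile("1-k8s.conflist", []byte(exampleBridgeConflist), 0o644)
	}
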
	I0920 10:32:35.996385    4398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:32:35.996444    4398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-593000 minikube.k8s.io/updated_at=2024_09_20T10_32_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=stopped-upgrade-593000 minikube.k8s.io/primary=true
	I0920 10:32:35.996445    4398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:32:36.040327    4398 kubeadm.go:1113] duration metric: took 43.922042ms to wait for elevateKubeSystemPrivileges
	I0920 10:32:36.040348    4398 ops.go:34] apiserver oom_adj: -16
	I0920 10:32:36.040380    4398 kubeadm.go:394] duration metric: took 4m12.301018208s to StartCluster
	I0920 10:32:36.040392    4398 settings.go:142] acquiring lock: {Name:mkc8690df96bb5b3a10e10e028bcb5cdae886c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:36.040478    4398 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:32:36.040942    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/kubeconfig: {Name:mk92240b7e07f1d8cacfa83b258a7ee6b4d7270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:36.041168    4398 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:32:36.041191    4398 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:32:36.041249    4398 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-593000"
	I0920 10:32:36.041258    4398 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-593000"
	W0920 10:32:36.041261    4398 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:32:36.041272    4398 host.go:66] Checking if "stopped-upgrade-593000" exists ...
	I0920 10:32:36.041283    4398 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-593000"
	I0920 10:32:36.041294    4398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-593000"
	I0920 10:32:36.041296    4398 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:32:36.044525    4398 out.go:177] * Verifying Kubernetes components...
	I0920 10:32:36.045158    4398 kapi.go:59] client config for stopped-upgrade-593000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/stopped-upgrade-593000/client.key", CAFile:"/Users/jenkins/minikube-integration/19672-1143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102212030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:32:36.047855    4398 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-593000"
	W0920 10:32:36.047861    4398 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:32:36.047870    4398 host.go:66] Checking if "stopped-upgrade-593000" exists ...
	I0920 10:32:36.048436    4398 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:32:36.048442    4398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:32:36.048447    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:32:36.050577    4398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:32:36.054516    4398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:32:36.058576    4398 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:32:36.058583    4398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:32:36.058588    4398 sshutil.go:53] new ssh client: &{IP:localhost Port:50485 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/stopped-upgrade-593000/id_rsa Username:docker}
	I0920 10:32:36.130291    4398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:32:36.135854    4398 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:32:36.135905    4398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:32:36.141247    4398 api_server.go:72] duration metric: took 100.066792ms to wait for apiserver process to appear ...
	I0920 10:32:36.141255    4398 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:32:36.141263    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:36.153200    4398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:32:36.168486    4398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:32:36.544781    4398 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:32:36.544793    4398 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:32:41.141575    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:41.141619    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:46.143288    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:46.143339    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:51.143636    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:51.143666    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:32:56.143977    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:32:56.144004    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:01.144427    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:01.144451    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:06.144928    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:06.144964    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:33:06.547412    4398 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:33:06.551841    4398 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:33:06.559622    4398 addons.go:510] duration metric: took 30.5185825s for enable addons: enabled=[storage-provisioner]
	I0920 10:33:11.145758    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:11.145781    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:16.146644    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:16.146670    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:21.147776    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:21.147821    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:26.149257    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:26.149298    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:31.151129    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:31.151153    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:36.153340    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:36.153516    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:33:36.177555    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:33:36.177667    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:33:36.196191    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:33:36.196278    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:33:36.206547    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:33:36.206631    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:33:36.217044    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:33:36.217133    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:33:36.226893    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:33:36.226968    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:33:36.237306    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:33:36.237389    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:33:36.247522    4398 logs.go:276] 0 containers: []
	W0920 10:33:36.247540    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:33:36.247630    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:33:36.258108    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:33:36.258125    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:33:36.258131    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:33:36.262586    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:33:36.262593    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:33:36.276349    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:33:36.276364    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:33:36.288101    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:33:36.288112    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:33:36.299813    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:33:36.299823    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:33:36.338927    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:33:36.338939    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:33:36.385516    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:33:36.385532    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:33:36.405147    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:33:36.405161    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:33:36.416230    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:33:36.416243    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:33:36.434633    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:33:36.434648    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:33:36.455671    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:33:36.455688    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:33:36.472439    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:33:36.472453    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:33:36.496052    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:33:36.496062    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:33:39.009477    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:44.010254    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:44.010418    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:33:44.021504    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:33:44.021599    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:33:44.031968    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:33:44.032056    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:33:44.042988    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:33:44.043075    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:33:44.053394    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:33:44.053476    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:33:44.063963    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:33:44.064037    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:33:44.074847    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:33:44.074927    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:33:44.085624    4398 logs.go:276] 0 containers: []
	W0920 10:33:44.085636    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:33:44.085704    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:33:44.096261    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:33:44.096277    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:33:44.096282    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:33:44.131200    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:33:44.131212    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:33:44.149335    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:33:44.149346    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:33:44.161202    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:33:44.161213    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:33:44.178246    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:33:44.178258    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:33:44.190060    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:33:44.190072    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:33:44.228167    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:33:44.228177    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:33:44.232495    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:33:44.232503    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:33:44.246774    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:33:44.246785    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:33:44.265671    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:33:44.265685    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:33:44.276802    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:33:44.276812    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:33:44.288366    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:33:44.288377    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:33:44.311967    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:33:44.311976    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:33:46.828740    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:51.831677    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:51.832201    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:33:51.881548    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:33:51.881695    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:33:51.901578    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:33:51.901697    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:33:51.915348    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:33:51.915426    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:33:51.927119    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:33:51.927202    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:33:51.938031    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:33:51.938115    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:33:51.948322    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:33:51.948405    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:33:51.958907    4398 logs.go:276] 0 containers: []
	W0920 10:33:51.958919    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:33:51.958987    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:33:51.969628    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:33:51.969645    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:33:51.969650    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:33:51.973734    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:33:51.973742    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:33:51.992396    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:33:51.992410    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:33:52.016043    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:33:52.016052    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:33:52.027588    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:33:52.027598    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:33:52.042421    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:33:52.042435    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:33:52.059002    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:33:52.059017    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:33:52.078274    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:33:52.078295    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:33:52.118978    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:33:52.119001    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:33:52.161620    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:33:52.161634    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:33:52.177756    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:33:52.177777    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:33:52.197630    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:33:52.197645    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:33:52.211454    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:33:52.211472    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:33:54.726121    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:33:59.728362    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:33:59.728925    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:33:59.767521    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:33:59.767693    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:33:59.789762    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:33:59.789886    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:33:59.804708    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:33:59.804800    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:33:59.817720    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:33:59.817802    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:33:59.828450    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:33:59.828533    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:33:59.840845    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:33:59.840933    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:33:59.851247    4398 logs.go:276] 0 containers: []
	W0920 10:33:59.851260    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:33:59.851329    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:33:59.861367    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:33:59.861380    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:33:59.861386    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:33:59.896026    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:33:59.896039    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:33:59.912294    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:33:59.912306    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:33:59.923616    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:33:59.923632    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:33:59.935143    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:33:59.935156    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:33:59.947132    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:33:59.947145    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:33:59.983288    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:33:59.983297    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:33:59.987522    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:33:59.987530    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:00.002141    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:00.002153    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:00.020976    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:00.020988    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:00.036043    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:00.036056    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:00.053567    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:00.053578    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:00.078004    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:00.078011    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:02.591957    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:07.594241    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:07.594549    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:07.620757    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:07.620892    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:07.636907    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:07.637006    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:07.649716    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:34:07.649802    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:07.661037    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:07.661113    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:07.671259    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:07.671339    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:07.681843    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:07.681923    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:07.692236    4398 logs.go:276] 0 containers: []
	W0920 10:34:07.692250    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:07.692321    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:07.703240    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:07.703255    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:07.703261    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:07.741311    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:07.741319    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:07.775917    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:07.775933    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:07.787546    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:07.787562    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:07.798636    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:07.798647    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:07.811342    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:07.811355    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:07.822857    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:07.822866    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:07.846369    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:07.846383    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:07.857704    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:07.857714    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:07.861774    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:07.861782    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:07.878048    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:07.878059    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:07.891619    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:07.891639    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:07.906669    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:07.906679    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:10.426300    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:15.428944    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:15.429070    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:15.439960    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:15.440038    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:15.450504    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:15.450589    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:15.460896    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:34:15.460969    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:15.471547    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:15.471628    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:15.481685    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:15.481773    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:15.492059    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:15.492144    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:15.502229    4398 logs.go:276] 0 containers: []
	W0920 10:34:15.502242    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:15.502310    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:15.512566    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:15.512583    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:15.512588    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:15.528764    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:15.528774    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:15.541965    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:15.541977    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:15.578074    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:15.578081    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:15.581994    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:15.582003    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:15.596592    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:15.596605    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:15.607856    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:15.607870    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:15.624675    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:15.624689    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:15.635744    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:15.635754    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:15.652895    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:15.652906    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:15.678276    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:15.678283    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:15.712615    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:15.712626    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:15.727415    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:15.727426    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:18.240832    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:23.243623    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:23.244199    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:23.282538    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:23.282681    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:23.302640    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:23.302746    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:23.317232    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:34:23.317326    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:23.328942    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:23.329014    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:23.340088    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:23.340174    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:23.351071    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:23.351149    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:23.361765    4398 logs.go:276] 0 containers: []
	W0920 10:34:23.361777    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:23.361841    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:23.372704    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:23.372720    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:23.372726    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:23.409509    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:23.409518    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:23.423916    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:23.423926    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:23.438432    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:23.438442    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:23.450130    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:23.450144    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:23.462036    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:23.462044    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:23.474120    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:23.474130    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:23.492040    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:23.492050    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:23.517505    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:23.517511    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:23.521626    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:23.521633    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:23.556557    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:23.556568    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:23.572722    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:23.572735    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:23.584704    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:23.584715    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:26.098689    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:31.101273    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:31.101371    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:31.117485    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:31.117568    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:31.128416    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:31.128491    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:31.139631    4398 logs.go:276] 2 containers: [901631c13925 314744d41af9]
	I0920 10:34:31.139712    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:31.154248    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:31.154318    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:31.164527    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:31.164599    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:31.175868    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:31.175953    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:31.186203    4398 logs.go:276] 0 containers: []
	W0920 10:34:31.186213    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:31.186273    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:31.196626    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:31.196641    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:31.196647    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:31.212383    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:31.212396    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:31.224031    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:31.224040    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:31.241515    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:31.241525    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:31.277742    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:31.277750    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:31.282351    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:31.282358    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:31.296500    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:31.296510    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:31.310891    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:31.310900    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:31.322240    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:31.322251    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:31.346648    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:31.346656    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:31.359028    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:31.359037    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:31.396426    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:31.396440    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:31.412683    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:31.412696    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:33.925740    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:38.928058    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:38.928228    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:38.951566    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:38.951676    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:38.966777    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:38.966867    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:38.983293    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:34:38.983385    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:38.994468    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:38.994541    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:39.011626    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:39.011703    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:39.022565    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:39.022647    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:39.032449    4398 logs.go:276] 0 containers: []
	W0920 10:34:39.032460    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:39.032537    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:39.043634    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:39.043652    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:39.043657    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:39.055538    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:39.055552    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:39.073543    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:39.073553    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:39.077993    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:39.077999    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:39.092533    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:39.092542    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:39.104678    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:39.104688    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:39.116881    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:39.116892    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:39.129401    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:39.129413    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:39.144480    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:39.144492    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:39.158479    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:39.158495    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:39.183732    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:39.183741    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:39.219317    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:34:39.219331    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:34:39.230589    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:39.230601    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:39.245793    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:39.245807    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:39.281744    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:34:39.281753    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:34:41.795545    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:46.798300    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:46.798694    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:46.830010    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:46.830182    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:46.849591    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:46.849693    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:46.864338    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:34:46.864432    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:46.883267    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:46.883346    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:46.894668    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:46.894746    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:46.907820    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:46.907906    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:46.918912    4398 logs.go:276] 0 containers: []
	W0920 10:34:46.918922    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:46.918986    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:46.930194    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:46.930212    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:46.930218    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:46.934498    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:46.934503    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:46.949014    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:46.949025    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:46.967280    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:46.967290    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:46.983438    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:46.983450    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:47.000911    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:47.000921    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:47.016962    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:34:47.016975    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:34:47.029700    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:47.029711    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:47.041392    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:47.041403    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:47.079494    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:47.079524    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:47.116473    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:34:47.116484    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:34:47.128314    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:47.128328    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:47.140470    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:47.140481    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:47.166206    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:47.166214    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:47.178239    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:47.178249    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:49.691971    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:34:54.694482    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:34:54.695058    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:34:54.736512    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:34:54.736671    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:34:54.758815    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:34:54.758928    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:34:54.774500    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:34:54.774598    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:34:54.787611    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:34:54.787702    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:34:54.799615    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:34:54.799695    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:34:54.810943    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:34:54.811023    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:34:54.821876    4398 logs.go:276] 0 containers: []
	W0920 10:34:54.821887    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:34:54.821949    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:34:54.832650    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:34:54.832668    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:34:54.832674    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:34:54.848106    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:34:54.848116    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:34:54.884712    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:34:54.884720    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:34:54.920574    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:34:54.920584    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:34:54.935321    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:34:54.935333    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:34:54.951258    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:34:54.951269    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:34:54.955777    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:34:54.955788    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:34:54.968791    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:34:54.968802    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:34:54.983168    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:34:54.983178    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:34:55.001664    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:34:55.001673    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:34:55.013811    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:34:55.013820    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:34:55.029750    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:34:55.029767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:34:55.042806    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:34:55.042819    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:34:55.057427    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:34:55.057437    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:34:55.083982    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:34:55.083997    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:34:57.599442    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:02.601995    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:02.602600    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:02.642496    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:02.642641    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:02.664963    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:02.665056    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:02.681795    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:02.681898    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:02.695759    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:02.695842    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:02.708176    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:02.708259    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:02.720781    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:02.720860    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:02.733004    4398 logs.go:276] 0 containers: []
	W0920 10:35:02.733019    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:02.733103    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:02.745088    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:02.745108    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:02.745114    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:02.785182    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:02.785199    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:02.806979    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:02.806995    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:02.820935    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:02.820947    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:02.846437    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:02.846447    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:02.850855    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:02.850863    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:02.865629    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:02.865640    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:02.878133    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:02.878144    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:02.890007    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:02.890018    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:02.927445    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:02.927458    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:02.939823    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:02.939831    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:02.951226    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:02.951237    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:02.967083    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:02.967097    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:02.982064    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:02.982078    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:02.994536    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:02.994551    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:05.508177    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:10.510483    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:10.510990    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:10.551103    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:10.551259    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:10.573217    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:10.573345    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:10.588676    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:10.588755    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:10.601086    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:10.601172    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:10.611663    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:10.611730    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:10.623106    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:10.623206    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:10.632675    4398 logs.go:276] 0 containers: []
	W0920 10:35:10.632687    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:10.632747    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:10.643399    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:10.643415    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:10.643421    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:10.677775    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:10.677787    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:10.689586    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:10.689601    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:10.701178    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:10.701189    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:10.705838    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:10.705843    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:10.716638    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:10.716651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:10.731589    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:10.731602    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:10.756565    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:10.756573    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:10.767836    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:10.767851    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:10.782643    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:10.782654    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:10.794641    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:10.794651    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:10.805955    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:10.805968    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:10.818113    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:10.818126    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:10.832183    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:10.832193    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:10.849076    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:10.849087    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:13.387891    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:18.390131    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:18.390206    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:18.401879    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:18.401946    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:18.413024    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:18.413107    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:18.425655    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:18.425725    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:18.436977    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:18.437043    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:18.447629    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:18.447705    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:18.458714    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:18.458779    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:18.470035    4398 logs.go:276] 0 containers: []
	W0920 10:35:18.470049    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:18.470104    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:18.481118    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:18.481132    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:18.481137    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:18.520913    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:18.520926    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:18.532747    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:18.532760    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:18.548390    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:18.548408    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:18.562059    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:18.562070    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:18.588057    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:18.588075    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:18.627987    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:18.627999    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:18.640846    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:18.640858    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:18.655064    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:18.655076    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:18.667415    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:18.667426    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:18.685219    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:18.685231    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:18.700260    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:18.700271    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:18.705343    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:18.705354    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:18.721892    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:18.721904    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:18.734632    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:18.734643    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:21.255276    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:26.257880    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:26.258196    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:26.295702    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:26.295830    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:26.311458    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:26.311556    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:26.330369    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:26.330444    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:26.340724    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:26.340812    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:26.351018    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:26.351100    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:26.361366    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:26.361444    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:26.371603    4398 logs.go:276] 0 containers: []
	W0920 10:35:26.371614    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:26.371673    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:26.382324    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:26.382339    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:26.382345    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:26.396353    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:26.396369    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:26.413429    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:26.413444    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:26.424889    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:26.424905    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:26.439150    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:26.439160    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:26.457100    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:26.457111    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:26.468324    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:26.468337    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:26.479792    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:26.479803    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:26.491318    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:26.491329    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:26.515036    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:26.515044    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:26.548895    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:26.548906    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:26.560339    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:26.560350    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:26.575886    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:26.575896    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:26.587131    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:26.587141    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:26.623129    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:26.623137    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:29.129433    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:34.132141    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:34.132611    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:34.169200    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:34.169362    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:34.199244    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:34.199327    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:34.215059    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:34.215149    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:34.227905    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:34.227991    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:34.239951    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:34.240042    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:34.250567    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:34.250648    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:34.261002    4398 logs.go:276] 0 containers: []
	W0920 10:35:34.261017    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:34.261086    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:34.277142    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:34.277162    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:34.277167    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:34.281519    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:34.281528    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:34.293782    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:34.293796    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:34.307642    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:34.307655    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:34.323281    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:34.323293    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:34.341194    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:34.341206    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:34.353242    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:34.353257    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:34.365319    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:34.365334    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:34.379800    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:34.379813    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:34.404407    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:34.404417    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:34.441207    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:34.441216    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:34.475533    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:34.475548    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:34.490776    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:34.490786    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:34.505008    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:34.505023    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:34.523449    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:34.523462    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:37.037523    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:42.040435    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:42.040547    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:42.052287    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:42.052381    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:42.064165    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:42.064268    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:42.075551    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:42.075629    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:42.088901    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:42.088997    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:42.100522    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:42.100612    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:42.111773    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:42.111855    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:42.123579    4398 logs.go:276] 0 containers: []
	W0920 10:35:42.123590    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:42.123672    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:42.135057    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:42.135077    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:42.135083    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:42.153966    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:42.153979    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:42.166799    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:42.166812    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:42.181958    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:42.181973    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:42.195139    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:42.195154    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:42.208393    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:42.208405    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:42.225909    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:42.225924    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:42.241352    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:42.241367    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:42.262396    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:42.262408    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:42.266916    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:42.266930    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:42.280827    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:42.280837    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:42.307544    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:42.307558    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:42.347861    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:42.347883    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:42.388932    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:42.388945    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:42.402684    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:42.402699    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:44.917729    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:49.920474    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:49.920744    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:49.947834    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:49.947998    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:49.965593    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:49.965716    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:49.979720    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:49.979818    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:49.991257    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:49.991340    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:50.002638    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:50.002734    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:50.013373    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:50.013457    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:50.024029    4398 logs.go:276] 0 containers: []
	W0920 10:35:50.024043    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:50.024123    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:50.034507    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:50.034524    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:50.034530    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:50.045919    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:50.045930    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:50.057911    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:50.057921    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:50.072134    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:50.072144    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:50.083171    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:50.083182    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:50.098368    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:50.098379    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:35:50.116135    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:50.116146    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:50.143286    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:50.143295    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:50.157281    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:50.157294    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:50.168709    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:50.168723    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:50.179796    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:50.179809    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:50.192469    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:50.192484    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:50.229156    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:50.229165    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:50.263746    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:50.263760    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:50.267902    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:50.267910    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:52.782034    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:35:57.784912    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:35:57.785529    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:35:57.820309    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:35:57.820469    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:35:57.847610    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:35:57.847717    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:35:57.861937    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:35:57.862027    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:35:57.875536    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:35:57.875618    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:35:57.886081    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:35:57.886163    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:35:57.896435    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:35:57.896513    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:35:57.908693    4398 logs.go:276] 0 containers: []
	W0920 10:35:57.908706    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:35:57.908777    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:35:57.919366    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:35:57.919385    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:35:57.919390    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:35:57.957113    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:35:57.957121    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:35:57.961617    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:35:57.961623    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:35:57.996675    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:35:57.996690    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:35:58.011517    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:35:58.011527    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:35:58.031700    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:35:58.031714    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:35:58.044485    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:35:58.044496    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:35:58.056595    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:35:58.056605    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:35:58.071224    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:35:58.071236    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:35:58.082280    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:35:58.082291    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:35:58.095714    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:35:58.095726    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:35:58.120479    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:35:58.120487    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:35:58.131651    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:35:58.131664    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:35:58.151639    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:35:58.151654    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:35:58.166395    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:35:58.166404    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:36:00.686673    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:36:05.689364    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:36:05.689452    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:36:05.705117    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:36:05.705237    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:36:05.717221    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:36:05.717311    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:36:05.729653    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:36:05.729745    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:36:05.740829    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:36:05.740907    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:36:05.753834    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:36:05.753922    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:36:05.765345    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:36:05.765426    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:36:05.776845    4398 logs.go:276] 0 containers: []
	W0920 10:36:05.776859    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:36:05.776938    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:36:05.789835    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:36:05.789855    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:36:05.789861    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:36:05.828647    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:36:05.828663    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:36:05.844786    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:36:05.844799    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:36:05.861438    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:36:05.861451    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:36:05.874067    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:36:05.874081    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:36:05.878799    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:36:05.878811    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:36:05.916203    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:36:05.916216    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:36:05.932210    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:36:05.932219    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:36:05.943817    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:36:05.943828    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:36:05.956964    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:36:05.956976    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:36:05.970090    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:36:05.970104    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:36:05.994609    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:36:05.994622    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:36:06.008014    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:36:06.008029    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:36:06.021276    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:36:06.021284    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:36:06.036476    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:36:06.036493    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:36:08.563093    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:36:13.565612    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:36:13.566227    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:36:13.612679    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:36:13.612841    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:36:13.663443    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:36:13.663531    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:36:13.675705    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:36:13.675790    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:36:13.691706    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:36:13.691784    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:36:13.702181    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:36:13.702258    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:36:13.712618    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:36:13.712704    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:36:13.722955    4398 logs.go:276] 0 containers: []
	W0920 10:36:13.722969    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:36:13.723036    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:36:13.733491    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:36:13.733512    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:36:13.733518    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:36:13.772301    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:36:13.772311    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:36:13.786558    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:36:13.786573    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:36:13.798392    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:36:13.798402    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:36:13.813532    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:36:13.813542    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:36:13.818229    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:36:13.818238    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:36:13.832281    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:36:13.832292    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:36:13.844273    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:36:13.844283    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:36:13.860762    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:36:13.860772    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:36:13.871910    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:36:13.871924    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:36:13.883590    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:36:13.883603    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:36:13.918124    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:36:13.918136    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:36:13.930197    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:36:13.930212    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:36:13.948161    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:36:13.948172    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:36:13.971094    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:36:13.971102    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:36:16.483141    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:36:21.485921    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:36:21.486536    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:36:21.528160    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:36:21.528319    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:36:21.550352    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:36:21.550492    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:36:21.565892    4398 logs.go:276] 4 containers: [fc118384e680 69cdbaa76850 901631c13925 314744d41af9]
	I0920 10:36:21.565979    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:36:21.578891    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:36:21.578974    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:36:21.589545    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:36:21.589620    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:36:21.600385    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:36:21.600462    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:36:21.610590    4398 logs.go:276] 0 containers: []
	W0920 10:36:21.610607    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:36:21.610680    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:36:21.621049    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:36:21.621065    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:36:21.621070    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:36:21.625757    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:36:21.625767    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:36:21.639971    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:36:21.639982    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:36:21.651223    4398 logs.go:123] Gathering logs for coredns [901631c13925] ...
	I0920 10:36:21.651236    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 901631c13925"
	I0920 10:36:21.664073    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:36:21.664087    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:36:21.679053    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:36:21.679063    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:36:21.689983    4398 logs.go:123] Gathering logs for coredns [314744d41af9] ...
	I0920 10:36:21.689992    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 314744d41af9"
	I0920 10:36:21.701526    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:36:21.701544    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:36:21.724989    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:36:21.724997    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:36:21.760737    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:36:21.760752    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:36:21.774485    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:36:21.774496    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:36:21.785702    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:36:21.785712    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:36:21.822924    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:36:21.822937    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:36:21.834760    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:36:21.834770    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:36:21.858689    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:36:21.858701    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:36:24.372452    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:36:29.375289    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:36:29.375829    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:36:29.417970    4398 logs.go:276] 1 containers: [dded8c0fe7a7]
	I0920 10:36:29.418121    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:36:29.436483    4398 logs.go:276] 1 containers: [d1e7f8492f7b]
	I0920 10:36:29.436616    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:36:29.450584    4398 logs.go:276] 4 containers: [47cb5d2a85b9 d5e69a422fde fc118384e680 69cdbaa76850]
	I0920 10:36:29.450680    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:36:29.464918    4398 logs.go:276] 1 containers: [34f214c52885]
	I0920 10:36:29.465005    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:36:29.475820    4398 logs.go:276] 1 containers: [7808b2a392ae]
	I0920 10:36:29.475911    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:36:29.486166    4398 logs.go:276] 1 containers: [125a59f648ac]
	I0920 10:36:29.486236    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:36:29.497013    4398 logs.go:276] 0 containers: []
	W0920 10:36:29.497024    4398 logs.go:278] No container was found matching "kindnet"
	I0920 10:36:29.497099    4398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:36:29.507680    4398 logs.go:276] 1 containers: [64c3f96f5ce3]
	I0920 10:36:29.507699    4398 logs.go:123] Gathering logs for kubelet ...
	I0920 10:36:29.507705    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:36:29.543809    4398 logs.go:123] Gathering logs for dmesg ...
	I0920 10:36:29.543819    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:36:29.547958    4398 logs.go:123] Gathering logs for storage-provisioner [64c3f96f5ce3] ...
	I0920 10:36:29.547966    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c3f96f5ce3"
	I0920 10:36:29.560036    4398 logs.go:123] Gathering logs for container status ...
	I0920 10:36:29.560050    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:36:29.571388    4398 logs.go:123] Gathering logs for kube-apiserver [dded8c0fe7a7] ...
	I0920 10:36:29.571398    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dded8c0fe7a7"
	I0920 10:36:29.586843    4398 logs.go:123] Gathering logs for etcd [d1e7f8492f7b] ...
	I0920 10:36:29.586854    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1e7f8492f7b"
	I0920 10:36:29.601488    4398 logs.go:123] Gathering logs for coredns [d5e69a422fde] ...
	I0920 10:36:29.601498    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e69a422fde"
	I0920 10:36:29.619594    4398 logs.go:123] Gathering logs for Docker ...
	I0920 10:36:29.619606    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:36:29.642545    4398 logs.go:123] Gathering logs for coredns [47cb5d2a85b9] ...
	I0920 10:36:29.642554    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47cb5d2a85b9"
	I0920 10:36:29.653889    4398 logs.go:123] Gathering logs for kube-scheduler [34f214c52885] ...
	I0920 10:36:29.653903    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f214c52885"
	I0920 10:36:29.669182    4398 logs.go:123] Gathering logs for kube-controller-manager [125a59f648ac] ...
	I0920 10:36:29.669193    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 125a59f648ac"
	I0920 10:36:29.687063    4398 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:36:29.687074    4398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:36:29.721410    4398 logs.go:123] Gathering logs for coredns [fc118384e680] ...
	I0920 10:36:29.721422    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc118384e680"
	I0920 10:36:29.733036    4398 logs.go:123] Gathering logs for coredns [69cdbaa76850] ...
	I0920 10:36:29.733049    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69cdbaa76850"
	I0920 10:36:29.747587    4398 logs.go:123] Gathering logs for kube-proxy [7808b2a392ae] ...
	I0920 10:36:29.747599    4398 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7808b2a392ae"
	I0920 10:36:32.261469    4398 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:36:37.263719    4398 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:36:37.268780    4398 out.go:201] 
	W0920 10:36:37.281030    4398 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:36:37.281065    4398 out.go:270] * 
	* 
	W0920 10:36:37.283194    4398 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:37.297733    4398 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-593000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.92s)
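
Editor's note: the stderr capture above is one long retry loop. Roughly every eight seconds minikube probes https://10.0.2.15:8443/healthz, each probe times out after about 5s ("Client.Timeout exceeded while awaiting headers"), and between probes it re-enumerates the control-plane containers and tails their logs, until the 6m0s node wait expires with GUEST_START. The Go sketch below reconstructs only the health-wait part for reference; it is an illustration built from the timeouts visible in the log, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz keeps probing the apiserver health endpoint until it answers
// with 200 or the overall deadline passes, mirroring the cadence in the log
// (about 5s per request, 6m0s total wait).
func waitForHealthz(url string, perRequest, overall time.Duration) error {
	client := &http.Client{
		Timeout: perRequest, // each probe gives up after ~5s, as in the log
		Transport: &http.Transport{
			// the guest apiserver presents a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // short pause before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
	if err != nil {
		fmt.Println("X", err)
	}
}

Because the probe never returns 200, the loop runs for the full six minutes, which accounts for the bulk of the 573.92s this test spends before failing.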

                                                
                                    
TestPause/serial/Start (10s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-935000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-935000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.939379416s)

                                                
                                                
-- stdout --
	* [pause-935000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-935000" primary control-plane node in "pause-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-935000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-935000 -n pause-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-935000 -n pause-935000: exit status 7 (61.240417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
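
Editor's note: unlike the upgrade test above, this failure (and every remaining qemu2 failure in this run) never reaches an apiserver. Host creation fails twice with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so the driver exits with GUEST_PROVISION. A quick way to confirm the problem lies on the CI host rather than in minikube is to dial the socket directly; the small hypothetical probe below does only that, using the socket path taken from the error text.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	// Try to open the socket_vmnet control socket the qemu2 driver needs.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refusal here reproduces the driver's error without minikube,
		// pointing at socket_vmnet not running on this host.
		fmt.Printf("cannot reach %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If this dial is refused as well, socket_vmnet itself is down on the host, which would explain why every --driver=qemu2 start in this report fails within roughly ten seconds.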

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 : exit status 80 (9.87856575s)

                                                
                                                
-- stdout --
	* [NoKubernetes-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-315000" primary control-plane node in "NoKubernetes-315000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000: exit status 7 (65.152375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 : exit status 80 (5.262207542s)

                                                
                                                
-- stdout --
	* [NoKubernetes-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-315000
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000: exit status 7 (65.939042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)
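
Editor's note: each of these failures ends with the same post-mortem. helpers_test.go runs `out/minikube-darwin-arm64 status --format={{.Host}}`, gets exit status 7 with "Stopped" on stdout, notes that this "may be ok", and skips log retrieval because the host never ran. The hypothetical Go snippet below shows the shape of that probe: run the status command, capture the templated host state, and surface the exit code instead of treating it as a hard error. The binary path and profile name are placeholders taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// returns the reported host state plus the command's exit code.
func hostStatus(minikube, profile string) (string, int, error) {
	cmd := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit is expected when the host is not running.
		return state, exitErr.ExitCode(), nil
	}
	return state, 0, err
}

func main() {
	state, code, err := hostStatus("out/minikube-darwin-arm64", "NoKubernetes-315000")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", state, code) // e.g. host="Stopped" exit=7 (may be ok)
}

In this run the "Stopped" state is expected, since the VM never came up after the failed create.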

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2480815s)

                                                
                                                
-- stdout --
	* [NoKubernetes-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-315000
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000: exit status 7 (55.22275ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)
TestNoKubernetes/serial/StartNoArgs (5.3s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 : exit status 80 (5.267415541s)
-- stdout --
	* [NoKubernetes-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-315000
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-315000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-315000 -n NoKubernetes-315000: exit status 7 (32.688459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
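All three NoKubernetes failures above end the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never gets its network and minikube exits with GUEST_PROVISION. A minimal pre-flight check on the agent might look like the following sketch; the socket path comes from the log, while the Homebrew service name is an assumption about how socket_vmnet was installed:

    # does the unix socket exist, and is a daemon listening on it?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet
    # if nothing is listening, restart the daemon (Homebrew-managed install assumed)
    sudo brew services restart socket_vmnet

If the daemon is healthy, the "minikube delete -p NoKubernetes-315000" suggested in the output above, followed by a fresh start, is the next step.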
TestNetworkPlugins/group/auto/Start (9.81s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0920 10:35:02.477191    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.809192s)
-- stdout --
	* [auto-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-692000" primary control-plane node in "auto-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:34:55.211969    4936 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:55.212118    4936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:55.212121    4936 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:55.212124    4936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:55.212240    4936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:34:55.213278    4936 out.go:352] Setting JSON to false
	I0920 10:34:55.229784    4936 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3858,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:34:55.229860    4936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:34:55.236927    4936 out.go:177] * [auto-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:34:55.244560    4936 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:34:55.244609    4936 notify.go:220] Checking for updates...
	I0920 10:34:55.250723    4936 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:34:55.253687    4936 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:34:55.256706    4936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:34:55.259689    4936 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:34:55.261176    4936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:34:55.265033    4936 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:55.265100    4936 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:34:55.265147    4936 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:34:55.269695    4936 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:34:55.275642    4936 start.go:297] selected driver: qemu2
	I0920 10:34:55.275646    4936 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:34:55.275651    4936 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:34:55.277809    4936 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:34:55.280739    4936 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:34:55.283832    4936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:34:55.283857    4936 cni.go:84] Creating CNI manager for ""
	I0920 10:34:55.283890    4936 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:34:55.283902    4936 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:34:55.283926    4936 start.go:340] cluster config:
	{Name:auto-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:34:55.287831    4936 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:34:55.295662    4936 out.go:177] * Starting "auto-692000" primary control-plane node in "auto-692000" cluster
	I0920 10:34:55.299651    4936 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:34:55.299665    4936 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:34:55.299672    4936 cache.go:56] Caching tarball of preloaded images
	I0920 10:34:55.299736    4936 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:34:55.299747    4936 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:34:55.299812    4936 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/auto-692000/config.json ...
	I0920 10:34:55.299826    4936 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/auto-692000/config.json: {Name:mk91c24ed61d73bb668ada5ea53cab87167258ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:34:55.300324    4936 start.go:360] acquireMachinesLock for auto-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:34:55.300365    4936 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "auto-692000"
	I0920 10:34:55.300379    4936 start.go:93] Provisioning new machine with config: &{Name:auto-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:34:55.300415    4936 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:34:55.309618    4936 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:34:55.326311    4936 start.go:159] libmachine.API.Create for "auto-692000" (driver="qemu2")
	I0920 10:34:55.326340    4936 client.go:168] LocalClient.Create starting
	I0920 10:34:55.326416    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:34:55.326463    4936 main.go:141] libmachine: Decoding PEM data...
	I0920 10:34:55.326471    4936 main.go:141] libmachine: Parsing certificate...
	I0920 10:34:55.326507    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:34:55.326531    4936 main.go:141] libmachine: Decoding PEM data...
	I0920 10:34:55.326540    4936 main.go:141] libmachine: Parsing certificate...
	I0920 10:34:55.326943    4936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:34:55.489414    4936 main.go:141] libmachine: Creating SSH key...
	I0920 10:34:55.539235    4936 main.go:141] libmachine: Creating Disk image...
	I0920 10:34:55.539242    4936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:34:55.539435    4936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:34:55.548632    4936 main.go:141] libmachine: STDOUT: 
	I0920 10:34:55.548654    4936 main.go:141] libmachine: STDERR: 
	I0920 10:34:55.548720    4936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2 +20000M
	I0920 10:34:55.556530    4936 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:34:55.556550    4936 main.go:141] libmachine: STDERR: 
	I0920 10:34:55.556564    4936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:34:55.556568    4936 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:34:55.556579    4936 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:34:55.556605    4936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:96:9b:61:86:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:34:55.558388    4936 main.go:141] libmachine: STDOUT: 
	I0920 10:34:55.558403    4936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:34:55.558424    4936 client.go:171] duration metric: took 232.079542ms to LocalClient.Create
	I0920 10:34:57.560625    4936 start.go:128] duration metric: took 2.260187917s to createHost
	I0920 10:34:57.560722    4936 start.go:83] releasing machines lock for "auto-692000", held for 2.260358916s
	W0920 10:34:57.560799    4936 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:34:57.572181    4936 out.go:177] * Deleting "auto-692000" in qemu2 ...
	W0920 10:34:57.611181    4936 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:34:57.611214    4936 start.go:729] Will try again in 5 seconds ...
	I0920 10:35:02.613398    4936 start.go:360] acquireMachinesLock for auto-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:02.613920    4936 start.go:364] duration metric: took 431.042µs to acquireMachinesLock for "auto-692000"
	I0920 10:35:02.614008    4936 start.go:93] Provisioning new machine with config: &{Name:auto-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:02.614249    4936 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:02.625845    4936 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:02.662547    4936 start.go:159] libmachine.API.Create for "auto-692000" (driver="qemu2")
	I0920 10:35:02.662594    4936 client.go:168] LocalClient.Create starting
	I0920 10:35:02.662702    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:02.662765    4936 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:02.662782    4936 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:02.662847    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:02.662886    4936 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:02.662899    4936 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:02.663463    4936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:02.831586    4936 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:02.917400    4936 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:02.917413    4936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:02.917653    4936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:35:02.927459    4936 main.go:141] libmachine: STDOUT: 
	I0920 10:35:02.927484    4936 main.go:141] libmachine: STDERR: 
	I0920 10:35:02.927548    4936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2 +20000M
	I0920 10:35:02.936647    4936 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:02.936686    4936 main.go:141] libmachine: STDERR: 
	I0920 10:35:02.936699    4936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:35:02.936705    4936 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:02.936713    4936 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:02.936757    4936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:e4:24:d4:40:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/auto-692000/disk.qcow2
	I0920 10:35:02.938907    4936 main.go:141] libmachine: STDOUT: 
	I0920 10:35:02.938931    4936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:02.938947    4936 client.go:171] duration metric: took 276.347292ms to LocalClient.Create
	I0920 10:35:04.941146    4936 start.go:128] duration metric: took 2.326866166s to createHost
	I0920 10:35:04.941249    4936 start.go:83] releasing machines lock for "auto-692000", held for 2.327296625s
	W0920 10:35:04.941618    4936 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:04.959302    4936 out.go:201] 
	W0920 10:35:04.962301    4936 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:35:04.962328    4936 out.go:270] * 
	* 
	W0920 10:35:04.965171    4936 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:35:04.979262    4936 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
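The verbose trace above shows the exact invocation that fails: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3. The connection can be exercised without starting QEMU at all; the sketch below wraps a trivial command instead of qemu-system-aarch64, which is an assumption about socket_vmnet_client usage inferred from this log rather than a documented recipe:

    # while the daemon is down this should print the same
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen above
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

Because every NetworkPlugins start below fails at this same step, fixing the daemon once should clear the whole group.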
TestNetworkPlugins/group/flannel/Start (9.96s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.959729s)
-- stdout --
	* [flannel-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-692000" primary control-plane node in "flannel-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:35:07.157076    5045 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:35:07.157237    5045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:07.157240    5045 out.go:358] Setting ErrFile to fd 2...
	I0920 10:35:07.157242    5045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:07.157390    5045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:35:07.158458    5045 out.go:352] Setting JSON to false
	I0920 10:35:07.174868    5045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3870,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:35:07.174941    5045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:35:07.181890    5045 out.go:177] * [flannel-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:35:07.189573    5045 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:35:07.189626    5045 notify.go:220] Checking for updates...
	I0920 10:35:07.195700    5045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:35:07.197258    5045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:35:07.200822    5045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:35:07.203694    5045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:35:07.206723    5045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:35:07.210082    5045 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:35:07.210148    5045 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:35:07.210195    5045 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:35:07.214700    5045 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:35:07.221695    5045 start.go:297] selected driver: qemu2
	I0920 10:35:07.221701    5045 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:35:07.221707    5045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:35:07.224117    5045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:35:07.226742    5045 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:35:07.229866    5045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:35:07.229893    5045 cni.go:84] Creating CNI manager for "flannel"
	I0920 10:35:07.229898    5045 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0920 10:35:07.229941    5045 start.go:340] cluster config:
	{Name:flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:35:07.233446    5045 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:35:07.240712    5045 out.go:177] * Starting "flannel-692000" primary control-plane node in "flannel-692000" cluster
	I0920 10:35:07.243702    5045 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:35:07.243718    5045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:35:07.243727    5045 cache.go:56] Caching tarball of preloaded images
	I0920 10:35:07.243798    5045 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:35:07.243804    5045 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:35:07.243869    5045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/flannel-692000/config.json ...
	I0920 10:35:07.243881    5045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/flannel-692000/config.json: {Name:mk26ff87a9e3e8b897c1965110315d35fdb3b50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:35:07.244096    5045 start.go:360] acquireMachinesLock for flannel-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:07.244128    5045 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "flannel-692000"
	I0920 10:35:07.244140    5045 start.go:93] Provisioning new machine with config: &{Name:flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:07.244163    5045 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:07.251588    5045 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:07.266993    5045 start.go:159] libmachine.API.Create for "flannel-692000" (driver="qemu2")
	I0920 10:35:07.267023    5045 client.go:168] LocalClient.Create starting
	I0920 10:35:07.267086    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:07.267121    5045 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:07.267130    5045 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:07.267165    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:07.267188    5045 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:07.267201    5045 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:07.267619    5045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:07.431729    5045 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:07.553786    5045 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:07.553795    5045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:07.553996    5045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:07.563442    5045 main.go:141] libmachine: STDOUT: 
	I0920 10:35:07.563460    5045 main.go:141] libmachine: STDERR: 
	I0920 10:35:07.563519    5045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2 +20000M
	I0920 10:35:07.571410    5045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:07.571432    5045 main.go:141] libmachine: STDERR: 
	I0920 10:35:07.571457    5045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:07.571463    5045 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:07.571476    5045 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:07.571507    5045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:95:a4:96:af:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:07.573101    5045 main.go:141] libmachine: STDOUT: 
	I0920 10:35:07.573125    5045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:07.573143    5045 client.go:171] duration metric: took 306.116333ms to LocalClient.Create
	I0920 10:35:09.575365    5045 start.go:128] duration metric: took 2.331181333s to createHost
	I0920 10:35:09.575463    5045 start.go:83] releasing machines lock for "flannel-692000", held for 2.331336333s
	W0920 10:35:09.575634    5045 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:09.595052    5045 out.go:177] * Deleting "flannel-692000" in qemu2 ...
	W0920 10:35:09.629263    5045 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:09.629298    5045 start.go:729] Will try again in 5 seconds ...
	I0920 10:35:14.631555    5045 start.go:360] acquireMachinesLock for flannel-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:14.632115    5045 start.go:364] duration metric: took 450.834µs to acquireMachinesLock for "flannel-692000"
	I0920 10:35:14.632187    5045 start.go:93] Provisioning new machine with config: &{Name:flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:14.632486    5045 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:14.641221    5045 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:14.691826    5045 start.go:159] libmachine.API.Create for "flannel-692000" (driver="qemu2")
	I0920 10:35:14.691883    5045 client.go:168] LocalClient.Create starting
	I0920 10:35:14.692022    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:14.692090    5045 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:14.692113    5045 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:14.692182    5045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:14.692229    5045 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:14.692242    5045 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:14.692775    5045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:14.866972    5045 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:15.020587    5045 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:15.020598    5045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:15.020823    5045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:15.030634    5045 main.go:141] libmachine: STDOUT: 
	I0920 10:35:15.030652    5045 main.go:141] libmachine: STDERR: 
	I0920 10:35:15.030706    5045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2 +20000M
	I0920 10:35:15.038740    5045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:15.038755    5045 main.go:141] libmachine: STDERR: 
	I0920 10:35:15.038767    5045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:15.038773    5045 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:15.038793    5045 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:15.038830    5045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:78:1e:30:ee:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/flannel-692000/disk.qcow2
	I0920 10:35:15.040528    5045 main.go:141] libmachine: STDOUT: 
	I0920 10:35:15.040551    5045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:15.040562    5045 client.go:171] duration metric: took 348.67375ms to LocalClient.Create
	I0920 10:35:17.042789    5045 start.go:128] duration metric: took 2.410275708s to createHost
	I0920 10:35:17.042870    5045 start.go:83] releasing machines lock for "flannel-692000", held for 2.410743375s
	W0920 10:35:17.043216    5045 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:17.052883    5045 out.go:201] 
	W0920 10:35:17.061915    5045 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:35:17.061944    5045 out.go:270] * 
	* 
	W0920 10:35:17.064555    5045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:35:17.072902    5045 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.96s)
TestNetworkPlugins/group/kindnet/Start (9.82s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.816860583s)

-- stdout --
	* [kindnet-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-692000" primary control-plane node in "kindnet-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:35:19.465566    5162 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:35:19.465686    5162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:19.465689    5162 out.go:358] Setting ErrFile to fd 2...
	I0920 10:35:19.465692    5162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:19.465849    5162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:35:19.467073    5162 out.go:352] Setting JSON to false
	I0920 10:35:19.483887    5162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3882,"bootTime":1726849837,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:35:19.483959    5162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:35:19.490618    5162 out.go:177] * [kindnet-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:35:19.498448    5162 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:35:19.498485    5162 notify.go:220] Checking for updates...
	I0920 10:35:19.505392    5162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:35:19.508395    5162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:35:19.511363    5162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:35:19.514381    5162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:35:19.517369    5162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:35:19.520690    5162 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:35:19.520757    5162 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:35:19.520799    5162 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:35:19.525427    5162 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:35:19.532418    5162 start.go:297] selected driver: qemu2
	I0920 10:35:19.532426    5162 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:35:19.532433    5162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:35:19.534602    5162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:35:19.537448    5162 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:35:19.539155    5162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:35:19.539179    5162 cni.go:84] Creating CNI manager for "kindnet"
	I0920 10:35:19.539183    5162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:35:19.539226    5162 start.go:340] cluster config:
	{Name:kindnet-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:35:19.542592    5162 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:35:19.549430    5162 out.go:177] * Starting "kindnet-692000" primary control-plane node in "kindnet-692000" cluster
	I0920 10:35:19.553338    5162 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:35:19.553355    5162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:35:19.553368    5162 cache.go:56] Caching tarball of preloaded images
	I0920 10:35:19.553424    5162 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:35:19.553429    5162 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:35:19.553487    5162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kindnet-692000/config.json ...
	I0920 10:35:19.553497    5162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kindnet-692000/config.json: {Name:mk4c4061da0d61209353bea41c2ca72e741ff53d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:35:19.553705    5162 start.go:360] acquireMachinesLock for kindnet-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:19.553735    5162 start.go:364] duration metric: took 24.834µs to acquireMachinesLock for "kindnet-692000"
	I0920 10:35:19.553746    5162 start.go:93] Provisioning new machine with config: &{Name:kindnet-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:19.553778    5162 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:19.562354    5162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:19.577269    5162 start.go:159] libmachine.API.Create for "kindnet-692000" (driver="qemu2")
	I0920 10:35:19.577298    5162 client.go:168] LocalClient.Create starting
	I0920 10:35:19.577364    5162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:19.577397    5162 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:19.577406    5162 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:19.577445    5162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:19.577467    5162 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:19.577481    5162 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:19.577858    5162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:19.745562    5162 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:19.808260    5162 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:19.808267    5162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:19.808442    5162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:19.817710    5162 main.go:141] libmachine: STDOUT: 
	I0920 10:35:19.817732    5162 main.go:141] libmachine: STDERR: 
	I0920 10:35:19.817794    5162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2 +20000M
	I0920 10:35:19.825882    5162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:19.825896    5162 main.go:141] libmachine: STDERR: 
	I0920 10:35:19.825916    5162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:19.825923    5162 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:19.825936    5162 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:19.825964    5162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:0f:24:63:06:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:19.827637    5162 main.go:141] libmachine: STDOUT: 
	I0920 10:35:19.827650    5162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:19.827669    5162 client.go:171] duration metric: took 250.366792ms to LocalClient.Create
	I0920 10:35:21.829875    5162 start.go:128] duration metric: took 2.276079042s to createHost
	I0920 10:35:21.829960    5162 start.go:83] releasing machines lock for "kindnet-692000", held for 2.276227542s
	W0920 10:35:21.830025    5162 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:21.841020    5162 out.go:177] * Deleting "kindnet-692000" in qemu2 ...
	W0920 10:35:21.878368    5162 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:21.878393    5162 start.go:729] Will try again in 5 seconds ...
	I0920 10:35:26.880477    5162 start.go:360] acquireMachinesLock for kindnet-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:26.880741    5162 start.go:364] duration metric: took 233.708µs to acquireMachinesLock for "kindnet-692000"
	I0920 10:35:26.880808    5162 start.go:93] Provisioning new machine with config: &{Name:kindnet-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:26.880917    5162 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:26.894260    5162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:26.925398    5162 start.go:159] libmachine.API.Create for "kindnet-692000" (driver="qemu2")
	I0920 10:35:26.925451    5162 client.go:168] LocalClient.Create starting
	I0920 10:35:26.925549    5162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:26.925602    5162 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:26.925616    5162 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:26.925690    5162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:26.925725    5162 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:26.925737    5162 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:26.926189    5162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:27.094753    5162 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:27.188409    5162 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:27.188416    5162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:27.188603    5162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:27.198098    5162 main.go:141] libmachine: STDOUT: 
	I0920 10:35:27.198113    5162 main.go:141] libmachine: STDERR: 
	I0920 10:35:27.198196    5162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2 +20000M
	I0920 10:35:27.206378    5162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:27.206393    5162 main.go:141] libmachine: STDERR: 
	I0920 10:35:27.206416    5162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:27.206422    5162 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:27.206431    5162 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:27.206456    5162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a2:7b:d6:8c:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kindnet-692000/disk.qcow2
	I0920 10:35:27.208196    5162 main.go:141] libmachine: STDOUT: 
	I0920 10:35:27.208211    5162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:27.208222    5162 client.go:171] duration metric: took 282.766875ms to LocalClient.Create
	I0920 10:35:29.210310    5162 start.go:128] duration metric: took 2.329391208s to createHost
	I0920 10:35:29.210352    5162 start.go:83] releasing machines lock for "kindnet-692000", held for 2.32961125s
	W0920 10:35:29.210562    5162 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:29.223990    5162 out.go:201] 
	W0920 10:35:29.226998    5162 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:35:29.227010    5162 out.go:270] * 
	* 
	W0920 10:35:29.228005    5162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:35:29.244187    5162 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)

TestNetworkPlugins/group/enable-default-cni/Start (9.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.945271333s)

-- stdout --
	* [enable-default-cni-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-692000" primary control-plane node in "enable-default-cni-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:35:31.547966    5279 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:35:31.548089    5279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:31.548092    5279 out.go:358] Setting ErrFile to fd 2...
	I0920 10:35:31.548101    5279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:31.548223    5279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:35:31.549359    5279 out.go:352] Setting JSON to false
	I0920 10:35:31.565862    5279 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3894,"bootTime":1726849837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:35:31.565958    5279 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:35:31.573085    5279 out.go:177] * [enable-default-cni-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:35:31.581039    5279 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:35:31.581070    5279 notify.go:220] Checking for updates...
	I0920 10:35:31.588001    5279 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:35:31.591124    5279 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:35:31.594007    5279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:35:31.596959    5279 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:35:31.600076    5279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:35:31.603366    5279 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:35:31.603432    5279 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:35:31.603482    5279 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:35:31.608013    5279 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:35:31.615090    5279 start.go:297] selected driver: qemu2
	I0920 10:35:31.615098    5279 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:35:31.615106    5279 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:35:31.617422    5279 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:35:31.620989    5279 out.go:177] * Automatically selected the socket_vmnet network
	E0920 10:35:31.624123    5279 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0920 10:35:31.624138    5279 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:35:31.624174    5279 cni.go:84] Creating CNI manager for "bridge"
	I0920 10:35:31.624178    5279 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:35:31.624210    5279 start.go:340] cluster config:
	{Name:enable-default-cni-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:35:31.628060    5279 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:35:31.633019    5279 out.go:177] * Starting "enable-default-cni-692000" primary control-plane node in "enable-default-cni-692000" cluster
	I0920 10:35:31.637020    5279 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:35:31.637035    5279 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:35:31.637045    5279 cache.go:56] Caching tarball of preloaded images
	I0920 10:35:31.637102    5279 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:35:31.637107    5279 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:35:31.637171    5279 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/enable-default-cni-692000/config.json ...
	I0920 10:35:31.637182    5279 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/enable-default-cni-692000/config.json: {Name:mkf8d1e1a4e96dfc9e5ab1e17e78f6817854b6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:35:31.637418    5279 start.go:360] acquireMachinesLock for enable-default-cni-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:31.637456    5279 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "enable-default-cni-692000"
	I0920 10:35:31.637469    5279 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:31.637502    5279 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:31.645988    5279 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:31.663123    5279 start.go:159] libmachine.API.Create for "enable-default-cni-692000" (driver="qemu2")
	I0920 10:35:31.663148    5279 client.go:168] LocalClient.Create starting
	I0920 10:35:31.663216    5279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:31.663249    5279 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:31.663262    5279 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:31.663302    5279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:31.663328    5279 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:31.663335    5279 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:31.663690    5279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:31.826811    5279 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:31.932444    5279 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:31.932452    5279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:31.932716    5279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:31.942251    5279 main.go:141] libmachine: STDOUT: 
	I0920 10:35:31.942274    5279 main.go:141] libmachine: STDERR: 
	I0920 10:35:31.942363    5279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2 +20000M
	I0920 10:35:31.950335    5279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:31.950356    5279 main.go:141] libmachine: STDERR: 
	I0920 10:35:31.950373    5279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:31.950377    5279 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:31.950391    5279 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:31.950427    5279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:1e:9a:03:45:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:31.952090    5279 main.go:141] libmachine: STDOUT: 
	I0920 10:35:31.952105    5279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:31.952131    5279 client.go:171] duration metric: took 288.979375ms to LocalClient.Create
	I0920 10:35:33.954339    5279 start.go:128] duration metric: took 2.316818584s to createHost
	I0920 10:35:33.954441    5279 start.go:83] releasing machines lock for "enable-default-cni-692000", held for 2.31698725s
	W0920 10:35:33.954552    5279 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:33.971974    5279 out.go:177] * Deleting "enable-default-cni-692000" in qemu2 ...
	W0920 10:35:34.005034    5279 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:34.005061    5279 start.go:729] Will try again in 5 seconds ...
	I0920 10:35:39.007304    5279 start.go:360] acquireMachinesLock for enable-default-cni-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:39.007962    5279 start.go:364] duration metric: took 538.125µs to acquireMachinesLock for "enable-default-cni-692000"
	I0920 10:35:39.008039    5279 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:39.008341    5279 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:39.018994    5279 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:39.062557    5279 start.go:159] libmachine.API.Create for "enable-default-cni-692000" (driver="qemu2")
	I0920 10:35:39.062616    5279 client.go:168] LocalClient.Create starting
	I0920 10:35:39.062720    5279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:39.062778    5279 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:39.062797    5279 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:39.062883    5279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:39.062924    5279 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:39.062933    5279 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:39.063546    5279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:39.234933    5279 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:39.395202    5279 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:39.395214    5279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:39.395435    5279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:39.405555    5279 main.go:141] libmachine: STDOUT: 
	I0920 10:35:39.405575    5279 main.go:141] libmachine: STDERR: 
	I0920 10:35:39.405652    5279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2 +20000M
	I0920 10:35:39.414812    5279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:39.414845    5279 main.go:141] libmachine: STDERR: 
	I0920 10:35:39.414859    5279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:39.414865    5279 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:39.414876    5279 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:39.414915    5279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:23:5b:2c:ee:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/enable-default-cni-692000/disk.qcow2
	I0920 10:35:39.417067    5279 main.go:141] libmachine: STDOUT: 
	I0920 10:35:39.417083    5279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:39.417109    5279 client.go:171] duration metric: took 354.487875ms to LocalClient.Create
	I0920 10:35:41.419344    5279 start.go:128] duration metric: took 2.410976s to createHost
	I0920 10:35:41.419426    5279 start.go:83] releasing machines lock for "enable-default-cni-692000", held for 2.4114485s
	W0920 10:35:41.419782    5279 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:41.433257    5279 out.go:201] 
	W0920 10:35:41.437409    5279 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:35:41.437457    5279 out.go:270] * 
	* 
	W0920 10:35:41.439951    5279 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:35:41.450335    5279 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.95s)
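The E0920 10:35:31.624123 line in the stderr above shows minikube treating --enable-default-cni as deprecated and rewriting it to --cni=bridge, so this case exercises the same bridge CNI path as the bridge test below and fails for the same socket_vmnet reason. A non-deprecated invocation of the same configuration would look roughly like the following (a sketch, not what the harness runs):

	out/minikube-darwin-arm64 start -p enable-default-cni-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2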

TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.856379125s)

-- stdout --
	* [bridge-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-692000" primary control-plane node in "bridge-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:35:43.702823    5388 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:35:43.702954    5388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:43.702958    5388 out.go:358] Setting ErrFile to fd 2...
	I0920 10:35:43.702961    5388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:43.703074    5388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:35:43.704154    5388 out.go:352] Setting JSON to false
	I0920 10:35:43.720367    5388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3906,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:35:43.720441    5388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:35:43.727058    5388 out.go:177] * [bridge-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:35:43.731822    5388 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:35:43.731853    5388 notify.go:220] Checking for updates...
	I0920 10:35:43.739805    5388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:35:43.742855    5388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:35:43.745854    5388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:35:43.748805    5388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:35:43.751798    5388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:35:43.755070    5388 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:35:43.755136    5388 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:35:43.755181    5388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:35:43.762778    5388 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:35:43.769779    5388 start.go:297] selected driver: qemu2
	I0920 10:35:43.769785    5388 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:35:43.769791    5388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:35:43.771998    5388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:35:43.775834    5388 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:35:43.779915    5388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:35:43.779946    5388 cni.go:84] Creating CNI manager for "bridge"
	I0920 10:35:43.779950    5388 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:35:43.780024    5388 start.go:340] cluster config:
	{Name:bridge-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:35:43.783720    5388 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:35:43.791783    5388 out.go:177] * Starting "bridge-692000" primary control-plane node in "bridge-692000" cluster
	I0920 10:35:43.795752    5388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:35:43.795766    5388 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:35:43.795774    5388 cache.go:56] Caching tarball of preloaded images
	I0920 10:35:43.795833    5388 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:35:43.795838    5388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:35:43.795896    5388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/bridge-692000/config.json ...
	I0920 10:35:43.795907    5388 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/bridge-692000/config.json: {Name:mkade9c7b9717be7afec223f88125d6668476c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:35:43.796508    5388 start.go:360] acquireMachinesLock for bridge-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:43.796543    5388 start.go:364] duration metric: took 29.083µs to acquireMachinesLock for "bridge-692000"
	I0920 10:35:43.796555    5388 start.go:93] Provisioning new machine with config: &{Name:bridge-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:43.796587    5388 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:43.804775    5388 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:43.822936    5388 start.go:159] libmachine.API.Create for "bridge-692000" (driver="qemu2")
	I0920 10:35:43.822973    5388 client.go:168] LocalClient.Create starting
	I0920 10:35:43.823032    5388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:43.823063    5388 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:43.823073    5388 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:43.823108    5388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:43.823131    5388 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:43.823140    5388 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:43.823648    5388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:43.987007    5388 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:44.129293    5388 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:44.129302    5388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:44.129530    5388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:44.139286    5388 main.go:141] libmachine: STDOUT: 
	I0920 10:35:44.139305    5388 main.go:141] libmachine: STDERR: 
	I0920 10:35:44.139374    5388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2 +20000M
	I0920 10:35:44.147682    5388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:44.147696    5388 main.go:141] libmachine: STDERR: 
	I0920 10:35:44.147713    5388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:44.147719    5388 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:44.147734    5388 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:44.147763    5388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:da:a7:5c:b8:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:44.149447    5388 main.go:141] libmachine: STDOUT: 
	I0920 10:35:44.149460    5388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:44.149479    5388 client.go:171] duration metric: took 326.502708ms to LocalClient.Create
	I0920 10:35:46.151703    5388 start.go:128] duration metric: took 2.355099917s to createHost
	I0920 10:35:46.151805    5388 start.go:83] releasing machines lock for "bridge-692000", held for 2.355265208s
	W0920 10:35:46.151898    5388 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:46.158743    5388 out.go:177] * Deleting "bridge-692000" in qemu2 ...
	W0920 10:35:46.185551    5388 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:46.185574    5388 start.go:729] Will try again in 5 seconds ...
	I0920 10:35:51.187729    5388 start.go:360] acquireMachinesLock for bridge-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:51.188080    5388 start.go:364] duration metric: took 268.125µs to acquireMachinesLock for "bridge-692000"
	I0920 10:35:51.188182    5388 start.go:93] Provisioning new machine with config: &{Name:bridge-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:51.188405    5388 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:51.197993    5388 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:51.235370    5388 start.go:159] libmachine.API.Create for "bridge-692000" (driver="qemu2")
	I0920 10:35:51.235424    5388 client.go:168] LocalClient.Create starting
	I0920 10:35:51.235550    5388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:51.235612    5388 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:51.235628    5388 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:51.235679    5388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:51.235720    5388 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:51.235731    5388 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:51.236287    5388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:51.404459    5388 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:51.460108    5388 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:51.460115    5388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:51.460319    5388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:51.469878    5388 main.go:141] libmachine: STDOUT: 
	I0920 10:35:51.469899    5388 main.go:141] libmachine: STDERR: 
	I0920 10:35:51.469958    5388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2 +20000M
	I0920 10:35:51.478041    5388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:51.478054    5388 main.go:141] libmachine: STDERR: 
	I0920 10:35:51.478074    5388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:51.478079    5388 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:51.478096    5388 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:51.478124    5388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:aa:8c:a1:9a:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/bridge-692000/disk.qcow2
	I0920 10:35:51.479815    5388 main.go:141] libmachine: STDOUT: 
	I0920 10:35:51.479827    5388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:51.479846    5388 client.go:171] duration metric: took 244.416834ms to LocalClient.Create
	I0920 10:35:53.482023    5388 start.go:128] duration metric: took 2.293596375s to createHost
	I0920 10:35:53.482114    5388 start.go:83] releasing machines lock for "bridge-692000", held for 2.294027s
	W0920 10:35:53.482411    5388 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:53.495919    5388 out.go:201] 
	W0920 10:35:53.500033    5388 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:35:53.500053    5388 out.go:270] * 
	* 
	W0920 10:35:53.502050    5388 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:35:53.516944    5388 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
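Every start attempt in this group fails at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon socket, so libmachine reports Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal diagnostic sketch for the test host follows; only the socket path /var/run/socket_vmnet is taken from the log above, the rest is an assumption about how one might inspect the host, not part of the recorded run.

	# hypothetical diagnostic sketch, not part of the recorded test run
	pgrep -fl socket_vmnet          # is a socket_vmnet daemon running at all?
	ls -l /var/run/socket_vmnet     # does the UNIX socket the driver dials actually exist?

If the daemon is absent, that would explain the identical exit status 80 failures in the neighbouring network-plugin tests in this report.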

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.847289708s)

                                                
                                                
-- stdout --
	* [kubenet-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-692000" primary control-plane node in "kubenet-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:35:55.741128    5504 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:35:55.741282    5504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:55.741286    5504 out.go:358] Setting ErrFile to fd 2...
	I0920 10:35:55.741288    5504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:35:55.741422    5504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:35:55.742541    5504 out.go:352] Setting JSON to false
	I0920 10:35:55.758713    5504 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3918,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:35:55.758783    5504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:35:55.765493    5504 out.go:177] * [kubenet-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:35:55.772465    5504 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:35:55.772520    5504 notify.go:220] Checking for updates...
	I0920 10:35:55.778365    5504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:35:55.781490    5504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:35:55.784396    5504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:35:55.787376    5504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:35:55.790417    5504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:35:55.793725    5504 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:35:55.793787    5504 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:35:55.793824    5504 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:35:55.797378    5504 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:35:55.804391    5504 start.go:297] selected driver: qemu2
	I0920 10:35:55.804398    5504 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:35:55.804404    5504 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:35:55.806655    5504 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:35:55.810461    5504 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:35:55.814660    5504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:35:55.814693    5504 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0920 10:35:55.814735    5504 start.go:340] cluster config:
	{Name:kubenet-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:35:55.818418    5504 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:35:55.825395    5504 out.go:177] * Starting "kubenet-692000" primary control-plane node in "kubenet-692000" cluster
	I0920 10:35:55.829403    5504 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:35:55.829416    5504 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:35:55.829424    5504 cache.go:56] Caching tarball of preloaded images
	I0920 10:35:55.829477    5504 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:35:55.829483    5504 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:35:55.829540    5504 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kubenet-692000/config.json ...
	I0920 10:35:55.829550    5504 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/kubenet-692000/config.json: {Name:mk19079190596f9c472875a9c7463dd08e73c083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:35:55.829763    5504 start.go:360] acquireMachinesLock for kubenet-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:35:55.829792    5504 start.go:364] duration metric: took 24µs to acquireMachinesLock for "kubenet-692000"
	I0920 10:35:55.829803    5504 start.go:93] Provisioning new machine with config: &{Name:kubenet-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:35:55.829825    5504 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:35:55.838380    5504 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:35:55.853682    5504 start.go:159] libmachine.API.Create for "kubenet-692000" (driver="qemu2")
	I0920 10:35:55.853715    5504 client.go:168] LocalClient.Create starting
	I0920 10:35:55.853774    5504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:35:55.853807    5504 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:55.853816    5504 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:55.853862    5504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:35:55.853886    5504 main.go:141] libmachine: Decoding PEM data...
	I0920 10:35:55.853895    5504 main.go:141] libmachine: Parsing certificate...
	I0920 10:35:55.854225    5504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:35:56.017636    5504 main.go:141] libmachine: Creating SSH key...
	I0920 10:35:56.142114    5504 main.go:141] libmachine: Creating Disk image...
	I0920 10:35:56.142122    5504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:35:56.142323    5504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:35:56.151850    5504 main.go:141] libmachine: STDOUT: 
	I0920 10:35:56.151868    5504 main.go:141] libmachine: STDERR: 
	I0920 10:35:56.151926    5504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2 +20000M
	I0920 10:35:56.160160    5504 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:35:56.160174    5504 main.go:141] libmachine: STDERR: 
	I0920 10:35:56.160203    5504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:35:56.160208    5504 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:35:56.160225    5504 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:35:56.160253    5504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b2:ea:fe:29:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:35:56.161971    5504 main.go:141] libmachine: STDOUT: 
	I0920 10:35:56.161986    5504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:35:56.162003    5504 client.go:171] duration metric: took 308.283042ms to LocalClient.Create
	I0920 10:35:58.162793    5504 start.go:128] duration metric: took 2.332973125s to createHost
	I0920 10:35:58.162809    5504 start.go:83] releasing machines lock for "kubenet-692000", held for 2.333025042s
	W0920 10:35:58.162824    5504 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:58.170357    5504 out.go:177] * Deleting "kubenet-692000" in qemu2 ...
	W0920 10:35:58.186160    5504 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:35:58.186168    5504 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:03.187660    5504 start.go:360] acquireMachinesLock for kubenet-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:03.188212    5504 start.go:364] duration metric: took 435.791µs to acquireMachinesLock for "kubenet-692000"
	I0920 10:36:03.188340    5504 start.go:93] Provisioning new machine with config: &{Name:kubenet-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:03.188567    5504 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:03.199198    5504 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:03.248166    5504 start.go:159] libmachine.API.Create for "kubenet-692000" (driver="qemu2")
	I0920 10:36:03.248236    5504 client.go:168] LocalClient.Create starting
	I0920 10:36:03.248353    5504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:03.248431    5504 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:03.248448    5504 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:03.248526    5504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:03.248573    5504 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:03.248588    5504 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:03.249310    5504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:03.421721    5504 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:03.488867    5504 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:03.488873    5504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:03.489080    5504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:36:03.498365    5504 main.go:141] libmachine: STDOUT: 
	I0920 10:36:03.498387    5504 main.go:141] libmachine: STDERR: 
	I0920 10:36:03.498445    5504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2 +20000M
	I0920 10:36:03.506702    5504 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:03.506721    5504 main.go:141] libmachine: STDERR: 
	I0920 10:36:03.506739    5504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:36:03.506744    5504 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:03.506757    5504 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:03.506784    5504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:8d:62:aa:7f:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/kubenet-692000/disk.qcow2
	I0920 10:36:03.508686    5504 main.go:141] libmachine: STDOUT: 
	I0920 10:36:03.508702    5504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:03.508719    5504 client.go:171] duration metric: took 260.479334ms to LocalClient.Create
	I0920 10:36:05.511028    5504 start.go:128] duration metric: took 2.322435917s to createHost
	I0920 10:36:05.511113    5504 start.go:83] releasing machines lock for "kubenet-692000", held for 2.322891375s
	W0920 10:36:05.511613    5504 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:05.527193    5504 out.go:201] 
	W0920 10:36:05.531247    5504 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:05.531307    5504 out.go:270] * 
	* 
	W0920 10:36:05.533451    5504 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:05.546104    5504 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
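The kubenet run hits the identical "Connection refused" on /var/run/socket_vmnet, so the failure looks environmental rather than specific to the network plugin under test. A plausible remediation on this host is sketched below; it assumes socket_vmnet was installed and is managed through Homebrew (Homebrew itself is clearly present, since the firmware path in the log lives under /opt/homebrew, but the service name is an assumption).

	sudo brew services restart socket_vmnet    # hypothetical: restart the daemon that should own /var/run/socket_vmnet
	ls -l /var/run/socket_vmnet                # re-check that the socket exists before re-running the group

Re-running minikube alone would not change this: the log shows socket_vmnet_client only dialing the socket, not creating it, so the daemon has to be brought up separately.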

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.927316459s)

                                                
                                                
-- stdout --
	* [custom-flannel-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-692000" primary control-plane node in "custom-flannel-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:36:07.806936    5613 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:07.807072    5613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:07.807075    5613 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:07.807077    5613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:07.807209    5613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:36:07.808310    5613 out.go:352] Setting JSON to false
	I0920 10:36:07.824920    5613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3930,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:36:07.824997    5613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:07.832472    5613 out.go:177] * [custom-flannel-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:07.840263    5613 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:36:07.840300    5613 notify.go:220] Checking for updates...
	I0920 10:36:07.846372    5613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:36:07.849216    5613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:07.852268    5613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:07.855287    5613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:36:07.858275    5613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:07.861560    5613 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:36:07.861624    5613 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:36:07.861668    5613 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:07.866252    5613 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:36:07.873252    5613 start.go:297] selected driver: qemu2
	I0920 10:36:07.873259    5613 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:36:07.873274    5613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:07.875504    5613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:36:07.878262    5613 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:36:07.881339    5613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:07.881363    5613 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0920 10:36:07.881374    5613 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0920 10:36:07.881408    5613 start.go:340] cluster config:
	{Name:custom-flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:07.884764    5613 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:07.892298    5613 out.go:177] * Starting "custom-flannel-692000" primary control-plane node in "custom-flannel-692000" cluster
	I0920 10:36:07.896222    5613 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:36:07.896234    5613 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:36:07.896240    5613 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:07.896291    5613 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:07.896297    5613 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:36:07.896346    5613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/custom-flannel-692000/config.json ...
	I0920 10:36:07.896357    5613 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/custom-flannel-692000/config.json: {Name:mk9481d4693b3c7abcbd0c366b96850c5081705a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:36:07.896566    5613 start.go:360] acquireMachinesLock for custom-flannel-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:07.896597    5613 start.go:364] duration metric: took 24.916µs to acquireMachinesLock for "custom-flannel-692000"
	I0920 10:36:07.896608    5613 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:07.896633    5613 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:07.905351    5613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:07.920725    5613 start.go:159] libmachine.API.Create for "custom-flannel-692000" (driver="qemu2")
	I0920 10:36:07.920755    5613 client.go:168] LocalClient.Create starting
	I0920 10:36:07.920825    5613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:07.920861    5613 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:07.920869    5613 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:07.920909    5613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:07.920932    5613 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:07.920940    5613 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:07.921357    5613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:08.084336    5613 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:08.253507    5613 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:08.253516    5613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:08.253755    5613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:08.263603    5613 main.go:141] libmachine: STDOUT: 
	I0920 10:36:08.263625    5613 main.go:141] libmachine: STDERR: 
	I0920 10:36:08.263685    5613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2 +20000M
	I0920 10:36:08.271738    5613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:08.271752    5613 main.go:141] libmachine: STDERR: 
	I0920 10:36:08.271771    5613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:08.271777    5613 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:08.271794    5613 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:08.271818    5613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:50:35:13:57:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:08.273517    5613 main.go:141] libmachine: STDOUT: 
	I0920 10:36:08.273532    5613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:08.273550    5613 client.go:171] duration metric: took 352.78975ms to LocalClient.Create
	I0920 10:36:10.273986    5613 start.go:128] duration metric: took 2.377352417s to createHost
	I0920 10:36:10.274029    5613 start.go:83] releasing machines lock for "custom-flannel-692000", held for 2.377438375s
	W0920 10:36:10.274074    5613 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:10.284597    5613 out.go:177] * Deleting "custom-flannel-692000" in qemu2 ...
	W0920 10:36:10.307456    5613 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:10.307476    5613 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:15.309739    5613 start.go:360] acquireMachinesLock for custom-flannel-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:15.310327    5613 start.go:364] duration metric: took 475.25µs to acquireMachinesLock for "custom-flannel-692000"
	I0920 10:36:15.310401    5613 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:15.310667    5613 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:15.318402    5613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:15.367710    5613 start.go:159] libmachine.API.Create for "custom-flannel-692000" (driver="qemu2")
	I0920 10:36:15.367770    5613 client.go:168] LocalClient.Create starting
	I0920 10:36:15.367924    5613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:15.367990    5613 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:15.368006    5613 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:15.368070    5613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:15.368117    5613 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:15.368134    5613 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:15.368688    5613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:15.542298    5613 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:15.641495    5613 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:15.641503    5613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:15.641711    5613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:15.651151    5613 main.go:141] libmachine: STDOUT: 
	I0920 10:36:15.651171    5613 main.go:141] libmachine: STDERR: 
	I0920 10:36:15.651229    5613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2 +20000M
	I0920 10:36:15.659418    5613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:15.659440    5613 main.go:141] libmachine: STDERR: 
	I0920 10:36:15.659455    5613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:15.659462    5613 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:15.659469    5613 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:15.659511    5613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:8b:b7:99:fb:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/custom-flannel-692000/disk.qcow2
	I0920 10:36:15.661143    5613 main.go:141] libmachine: STDOUT: 
	I0920 10:36:15.661157    5613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:15.661170    5613 client.go:171] duration metric: took 293.394792ms to LocalClient.Create
	I0920 10:36:17.663280    5613 start.go:128] duration metric: took 2.352605833s to createHost
	I0920 10:36:17.663336    5613 start.go:83] releasing machines lock for "custom-flannel-692000", held for 2.352996s
	W0920 10:36:17.663505    5613 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:17.673861    5613 out.go:201] 
	W0920 10:36:17.679949    5613 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:17.679975    5613 out.go:270] * 
	* 
	W0920 10:36:17.681197    5613 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:17.692900    5613 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)
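Every failure in this group reduces to the same root cause visible in the stderr above: the socket_vmnet_client invocation cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never started. The following standalone Go sketch is not part of the minikube test suite; it is a hypothetical host-side check, assuming only that the daemon is expected to be listening on the SocketVMnetPath shown in the cluster configs above, and it performs the same Unix-socket dial the client attempts:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken from the SocketVMnetPath value in the cluster configs logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition the tests report: Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If this dial fails the way the tests do, the next step would be restarting the socket_vmnet service on the CI host (however it is managed there); the start commands issued by the tests themselves look well-formed.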

TestNetworkPlugins/group/calico/Start (9.88s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.875756125s)

-- stdout --
	* [calico-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-692000" primary control-plane node in "calico-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:36:20.113151    5730 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:20.113279    5730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:20.113282    5730 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:20.113285    5730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:20.113428    5730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:36:20.114500    5730 out.go:352] Setting JSON to false
	I0920 10:36:20.131887    5730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3943,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:36:20.131959    5730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:20.136674    5730 out.go:177] * [calico-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:20.144712    5730 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:36:20.144765    5730 notify.go:220] Checking for updates...
	I0920 10:36:20.151682    5730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:36:20.154665    5730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:20.157697    5730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:20.160669    5730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:36:20.163726    5730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:20.166988    5730 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:36:20.167052    5730 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:36:20.167095    5730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:20.171691    5730 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:36:20.178693    5730 start.go:297] selected driver: qemu2
	I0920 10:36:20.178698    5730 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:36:20.178703    5730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:20.180941    5730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:36:20.183674    5730 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:36:20.186811    5730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:20.186835    5730 cni.go:84] Creating CNI manager for "calico"
	I0920 10:36:20.186838    5730 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0920 10:36:20.186884    5730 start.go:340] cluster config:
	{Name:calico-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:20.190381    5730 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:20.197660    5730 out.go:177] * Starting "calico-692000" primary control-plane node in "calico-692000" cluster
	I0920 10:36:20.201710    5730 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:36:20.201723    5730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:36:20.201729    5730 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:20.201790    5730 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:20.201795    5730 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:36:20.201842    5730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/calico-692000/config.json ...
	I0920 10:36:20.201852    5730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/calico-692000/config.json: {Name:mkcc118207dd4a1857e0148d4c94076156ac1f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:36:20.202044    5730 start.go:360] acquireMachinesLock for calico-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:20.202074    5730 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "calico-692000"
	I0920 10:36:20.202085    5730 start.go:93] Provisioning new machine with config: &{Name:calico-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:20.202110    5730 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:20.210670    5730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:20.226140    5730 start.go:159] libmachine.API.Create for "calico-692000" (driver="qemu2")
	I0920 10:36:20.226161    5730 client.go:168] LocalClient.Create starting
	I0920 10:36:20.226222    5730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:20.226254    5730 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:20.226265    5730 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:20.226305    5730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:20.226334    5730 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:20.226343    5730 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:20.226664    5730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:20.390510    5730 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:20.522272    5730 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:20.522279    5730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:20.522476    5730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:20.531808    5730 main.go:141] libmachine: STDOUT: 
	I0920 10:36:20.531823    5730 main.go:141] libmachine: STDERR: 
	I0920 10:36:20.531892    5730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2 +20000M
	I0920 10:36:20.539871    5730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:20.539885    5730 main.go:141] libmachine: STDERR: 
	I0920 10:36:20.539909    5730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:20.539916    5730 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:20.539929    5730 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:20.539957    5730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:c2:0d:c6:70:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:20.541593    5730 main.go:141] libmachine: STDOUT: 
	I0920 10:36:20.541605    5730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:20.541626    5730 client.go:171] duration metric: took 315.461292ms to LocalClient.Create
	I0920 10:36:22.543837    5730 start.go:128] duration metric: took 2.341704958s to createHost
	I0920 10:36:22.543919    5730 start.go:83] releasing machines lock for "calico-692000", held for 2.341848459s
	W0920 10:36:22.544047    5730 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:22.551411    5730 out.go:177] * Deleting "calico-692000" in qemu2 ...
	W0920 10:36:22.583897    5730 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:22.583929    5730 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:27.586121    5730 start.go:360] acquireMachinesLock for calico-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:27.586514    5730 start.go:364] duration metric: took 317.209µs to acquireMachinesLock for "calico-692000"
	I0920 10:36:27.586651    5730 start.go:93] Provisioning new machine with config: &{Name:calico-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:27.586865    5730 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:27.596342    5730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:27.637642    5730 start.go:159] libmachine.API.Create for "calico-692000" (driver="qemu2")
	I0920 10:36:27.637701    5730 client.go:168] LocalClient.Create starting
	I0920 10:36:27.637827    5730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:27.637888    5730 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:27.637902    5730 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:27.637954    5730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:27.638001    5730 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:27.638016    5730 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:27.638688    5730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:27.807577    5730 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:27.894951    5730 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:27.894959    5730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:27.895168    5730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:27.904889    5730 main.go:141] libmachine: STDOUT: 
	I0920 10:36:27.904906    5730 main.go:141] libmachine: STDERR: 
	I0920 10:36:27.904985    5730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2 +20000M
	I0920 10:36:27.912991    5730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:27.913014    5730 main.go:141] libmachine: STDERR: 
	I0920 10:36:27.913026    5730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:27.913032    5730 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:27.913042    5730 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:27.913080    5730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:bd:ea:cc:4b:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/calico-692000/disk.qcow2
	I0920 10:36:27.914817    5730 main.go:141] libmachine: STDOUT: 
	I0920 10:36:27.914831    5730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:27.914843    5730 client.go:171] duration metric: took 277.138333ms to LocalClient.Create
	I0920 10:36:29.916929    5730 start.go:128] duration metric: took 2.330048125s to createHost
	I0920 10:36:29.916960    5730 start.go:83] releasing machines lock for "calico-692000", held for 2.330446333s
	W0920 10:36:29.917103    5730 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:29.930362    5730 out.go:201] 
	W0920 10:36:29.933427    5730 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:29.933435    5730 out.go:270] * 
	* 
	W0920 10:36:29.934169    5730 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:29.950384    5730 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
TestNetworkPlugins/group/false/Start (9.89s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-692000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.89268975s)

-- stdout --
	* [false-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-692000" primary control-plane node in "false-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:36:32.347704    5847 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:32.347823    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:32.347827    5847 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:32.347829    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:32.347958    5847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:36:32.348995    5847 out.go:352] Setting JSON to false
	I0920 10:36:32.365711    5847 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3955,"bootTime":1726849837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:36:32.365780    5847 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:32.372124    5847 out.go:177] * [false-692000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:32.380084    5847 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:36:32.380133    5847 notify.go:220] Checking for updates...
	I0920 10:36:32.386059    5847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:36:32.389083    5847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:32.392096    5847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:32.395071    5847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:36:32.398071    5847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:32.401389    5847 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:36:32.401455    5847 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:36:32.401501    5847 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:32.406061    5847 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:36:32.413071    5847 start.go:297] selected driver: qemu2
	I0920 10:36:32.413077    5847 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:36:32.413083    5847 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:32.415284    5847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:36:32.418050    5847 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:36:32.421147    5847 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:32.421164    5847 cni.go:84] Creating CNI manager for "false"
	I0920 10:36:32.421188    5847 start.go:340] cluster config:
	{Name:false-692000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:32.424487    5847 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:32.432066    5847 out.go:177] * Starting "false-692000" primary control-plane node in "false-692000" cluster
	I0920 10:36:32.436089    5847 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:36:32.436104    5847 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:36:32.436113    5847 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:32.436163    5847 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:32.436168    5847 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:36:32.436218    5847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/false-692000/config.json ...
	I0920 10:36:32.436227    5847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/false-692000/config.json: {Name:mk407647bbbf1658c10366cdc794bc80757df0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:36:32.436428    5847 start.go:360] acquireMachinesLock for false-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:32.436457    5847 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "false-692000"
	I0920 10:36:32.436468    5847 start.go:93] Provisioning new machine with config: &{Name:false-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:32.436489    5847 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:32.445066    5847 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:32.460830    5847 start.go:159] libmachine.API.Create for "false-692000" (driver="qemu2")
	I0920 10:36:32.460859    5847 client.go:168] LocalClient.Create starting
	I0920 10:36:32.460925    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:32.460958    5847 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:32.460967    5847 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:32.461002    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:32.461028    5847 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:32.461038    5847 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:32.461382    5847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:32.626236    5847 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:32.799947    5847 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:32.799958    5847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:32.800173    5847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:32.809529    5847 main.go:141] libmachine: STDOUT: 
	I0920 10:36:32.809548    5847 main.go:141] libmachine: STDERR: 
	I0920 10:36:32.809605    5847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2 +20000M
	I0920 10:36:32.818009    5847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:32.818032    5847 main.go:141] libmachine: STDERR: 
	I0920 10:36:32.818048    5847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:32.818054    5847 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:32.818067    5847 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:32.818100    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:13:72:03:67:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:32.819846    5847 main.go:141] libmachine: STDOUT: 
	I0920 10:36:32.819860    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:32.819879    5847 client.go:171] duration metric: took 359.015542ms to LocalClient.Create
	I0920 10:36:34.820730    5847 start.go:128] duration metric: took 2.384240958s to createHost
	I0920 10:36:34.820772    5847 start.go:83] releasing machines lock for "false-692000", held for 2.384321625s
	W0920 10:36:34.820831    5847 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:34.838028    5847 out.go:177] * Deleting "false-692000" in qemu2 ...
	W0920 10:36:34.864294    5847 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:34.864307    5847 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:39.866440    5847 start.go:360] acquireMachinesLock for false-692000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:39.866735    5847 start.go:364] duration metric: took 243.625µs to acquireMachinesLock for "false-692000"
	I0920 10:36:39.866798    5847 start.go:93] Provisioning new machine with config: &{Name:false-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:39.866902    5847 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:39.876207    5847 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:36:39.902029    5847 start.go:159] libmachine.API.Create for "false-692000" (driver="qemu2")
	I0920 10:36:39.902069    5847 client.go:168] LocalClient.Create starting
	I0920 10:36:39.902154    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:39.902199    5847 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:39.902210    5847 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:39.902255    5847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:39.902286    5847 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:39.902294    5847 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:39.902757    5847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:40.066803    5847 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:40.148740    5847 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:40.148747    5847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:40.148965    5847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:40.158660    5847 main.go:141] libmachine: STDOUT: 
	I0920 10:36:40.158678    5847 main.go:141] libmachine: STDERR: 
	I0920 10:36:40.158740    5847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2 +20000M
	I0920 10:36:40.167054    5847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:40.167081    5847 main.go:141] libmachine: STDERR: 
	I0920 10:36:40.167094    5847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:40.167099    5847 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:40.167107    5847 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:40.167135    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ea:ee:ec:74:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/false-692000/disk.qcow2
	I0920 10:36:40.168864    5847 main.go:141] libmachine: STDOUT: 
	I0920 10:36:40.168878    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:40.168889    5847 client.go:171] duration metric: took 266.81525ms to LocalClient.Create
	I0920 10:36:42.171078    5847 start.go:128] duration metric: took 2.304155834s to createHost
	I0920 10:36:42.171143    5847 start.go:83] releasing machines lock for "false-692000", held for 2.304405292s
	W0920 10:36:42.171463    5847 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:42.184077    5847 out.go:201] 
	W0920 10:36:42.189114    5847 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:42.189134    5847 out.go:270] * 
	* 
	W0920 10:36:42.196670    5847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:42.201381    5847 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
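The failures above share one root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and each start exits with status 80. Below is a minimal diagnostic sketch, assuming socket_vmnet was installed via Homebrew on this host (the paths match the SocketVMnetClientPath and SocketVMnetPath values in the logged config); the commands are illustrative and are not part of the captured test output:

	# Does the socket exist, and is the socket_vmnet daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, restarting the Homebrew-managed service is one likely fix
	# (run as root, since socket_vmnet needs the macOS vmnet framework):
	sudo brew services restart socket_vmnet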

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.84302s)

                                                
                                                
-- stdout --
	* [old-k8s-version-305000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-305000" primary control-plane node in "old-k8s-version-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:36:44.406228    5960 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:44.406361    5960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:44.406364    5960 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:44.406366    5960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:44.406518    5960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:36:44.407718    5960 out.go:352] Setting JSON to false
	I0920 10:36:44.424176    5960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3967,"bootTime":1726849837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:36:44.424269    5960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:44.431495    5960 out.go:177] * [old-k8s-version-305000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:44.439396    5960 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:36:44.439438    5960 notify.go:220] Checking for updates...
	I0920 10:36:44.445350    5960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:36:44.448335    5960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:44.451358    5960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:44.454292    5960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:36:44.457351    5960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:44.460631    5960 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:36:44.460705    5960 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:36:44.460763    5960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:44.465354    5960 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:36:44.472270    5960 start.go:297] selected driver: qemu2
	I0920 10:36:44.472277    5960 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:36:44.472283    5960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:44.474767    5960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:36:44.477335    5960 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:36:44.480408    5960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:44.480425    5960 cni.go:84] Creating CNI manager for ""
	I0920 10:36:44.480451    5960 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:36:44.480491    5960 start.go:340] cluster config:
	{Name:old-k8s-version-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:44.484049    5960 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:44.491329    5960 out.go:177] * Starting "old-k8s-version-305000" primary control-plane node in "old-k8s-version-305000" cluster
	I0920 10:36:44.495208    5960 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:36:44.495225    5960 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:36:44.495237    5960 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:44.495312    5960 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:44.495324    5960 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:36:44.495384    5960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/old-k8s-version-305000/config.json ...
	I0920 10:36:44.495396    5960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/old-k8s-version-305000/config.json: {Name:mkdf801ab059db064731cdc643099f39382cccdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:36:44.495823    5960 start.go:360] acquireMachinesLock for old-k8s-version-305000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:44.495861    5960 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "old-k8s-version-305000"
	I0920 10:36:44.495875    5960 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:44.495899    5960 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:44.504344    5960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:36:44.521930    5960 start.go:159] libmachine.API.Create for "old-k8s-version-305000" (driver="qemu2")
	I0920 10:36:44.521960    5960 client.go:168] LocalClient.Create starting
	I0920 10:36:44.522025    5960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:44.522062    5960 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:44.522072    5960 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:44.522114    5960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:44.522138    5960 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:44.522145    5960 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:44.522611    5960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:44.697593    5960 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:44.815265    5960 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:44.815277    5960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:44.815495    5960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:44.824910    5960 main.go:141] libmachine: STDOUT: 
	I0920 10:36:44.824931    5960 main.go:141] libmachine: STDERR: 
	I0920 10:36:44.825007    5960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2 +20000M
	I0920 10:36:44.833084    5960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:44.833099    5960 main.go:141] libmachine: STDERR: 
	I0920 10:36:44.833117    5960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:44.833122    5960 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:44.833139    5960 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:44.833165    5960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:09:81:09:04:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:44.834816    5960 main.go:141] libmachine: STDOUT: 
	I0920 10:36:44.834828    5960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:44.834848    5960 client.go:171] duration metric: took 312.883917ms to LocalClient.Create
	I0920 10:36:46.836955    5960 start.go:128] duration metric: took 2.341050375s to createHost
	I0920 10:36:46.837026    5960 start.go:83] releasing machines lock for "old-k8s-version-305000", held for 2.341169291s
	W0920 10:36:46.837064    5960 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:46.853512    5960 out.go:177] * Deleting "old-k8s-version-305000" in qemu2 ...
	W0920 10:36:46.877814    5960 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:46.877829    5960 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:51.880165    5960 start.go:360] acquireMachinesLock for old-k8s-version-305000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:51.880567    5960 start.go:364] duration metric: took 315.042µs to acquireMachinesLock for "old-k8s-version-305000"
	I0920 10:36:51.880668    5960 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:51.880860    5960 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:51.890365    5960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:36:51.932030    5960 start.go:159] libmachine.API.Create for "old-k8s-version-305000" (driver="qemu2")
	I0920 10:36:51.932075    5960 client.go:168] LocalClient.Create starting
	I0920 10:36:51.932183    5960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:36:51.932248    5960 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:51.932264    5960 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:51.932330    5960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:36:51.932374    5960 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:51.932383    5960 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:51.933048    5960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:36:52.103450    5960 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:52.158607    5960 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:52.158614    5960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:52.158820    5960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:52.168367    5960 main.go:141] libmachine: STDOUT: 
	I0920 10:36:52.168406    5960 main.go:141] libmachine: STDERR: 
	I0920 10:36:52.168468    5960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2 +20000M
	I0920 10:36:52.176604    5960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:52.176629    5960 main.go:141] libmachine: STDERR: 
	I0920 10:36:52.176652    5960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:52.176658    5960 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:52.176669    5960 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:52.176706    5960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e8:d5:1d:e3:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:52.178400    5960 main.go:141] libmachine: STDOUT: 
	I0920 10:36:52.178425    5960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:52.178438    5960 client.go:171] duration metric: took 246.358875ms to LocalClient.Create
	I0920 10:36:54.180516    5960 start.go:128] duration metric: took 2.299639583s to createHost
	I0920 10:36:54.180548    5960 start.go:83] releasing machines lock for "old-k8s-version-305000", held for 2.299981041s
	W0920 10:36:54.180681    5960 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:54.191121    5960 out.go:201] 
	W0920 10:36:54.198132    5960 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:54.198137    5960 out.go:270] * 
	* 
	W0920 10:36:54.198623    5960 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:54.209150    5960 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (33.91425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-305000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-305000 create -f testdata/busybox.yaml: exit status 1 (28.973459ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-305000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-305000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (35.998792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (34.908375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
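This failure is downstream of FirstStart: because the cluster was never created, no kubeconfig context named old-k8s-version-305000 exists, and kubectl exits with status 1 before anything is deployed. A quick hypothetical check (not captured in the report) to confirm that:

	# List the contexts minikube has actually written to the kubeconfig
	kubectl config get-contexts
	kubectl config current-context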

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-305000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-305000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-305000 describe deploy/metrics-server -n kube-system: exit status 1 (28.033083ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-305000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-305000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (32.017375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0920 10:36:59.389044    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193905375s)

                                                
                                                
-- stdout --
	* [old-k8s-version-305000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-305000" primary control-plane node in "old-k8s-version-305000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:36:56.403569    6020 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:56.403699    6020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:56.403702    6020 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:56.403705    6020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:56.403835    6020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:36:56.404862    6020 out.go:352] Setting JSON to false
	I0920 10:36:56.421287    6020 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3979,"bootTime":1726849837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:36:56.421362    6020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:56.426491    6020 out.go:177] * [old-k8s-version-305000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:56.433536    6020 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:36:56.433600    6020 notify.go:220] Checking for updates...
	I0920 10:36:56.440440    6020 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:36:56.443456    6020 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:56.446491    6020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:56.449454    6020 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:36:56.452467    6020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:56.455739    6020 config.go:182] Loaded profile config "old-k8s-version-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:36:56.459489    6020 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:36:56.462423    6020 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:56.466481    6020 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:36:56.473485    6020 start.go:297] selected driver: qemu2
	I0920 10:36:56.473492    6020 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:56.473557    6020 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:56.475952    6020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:56.475980    6020 cni.go:84] Creating CNI manager for ""
	I0920 10:36:56.476007    6020 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:36:56.476040    6020 start.go:340] cluster config:
	{Name:old-k8s-version-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-305000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:56.479683    6020 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:56.487499    6020 out.go:177] * Starting "old-k8s-version-305000" primary control-plane node in "old-k8s-version-305000" cluster
	I0920 10:36:56.491522    6020 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:36:56.491537    6020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:36:56.491548    6020 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:56.491616    6020 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:56.491622    6020 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:36:56.491676    6020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/old-k8s-version-305000/config.json ...
	I0920 10:36:56.492236    6020 start.go:360] acquireMachinesLock for old-k8s-version-305000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:56.492269    6020 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "old-k8s-version-305000"
	I0920 10:36:56.492282    6020 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:36:56.492287    6020 fix.go:54] fixHost starting: 
	I0920 10:36:56.492403    6020 fix.go:112] recreateIfNeeded on old-k8s-version-305000: state=Stopped err=<nil>
	W0920 10:36:56.492412    6020 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:36:56.496459    6020 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-305000" ...
	I0920 10:36:56.503441    6020 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:56.503482    6020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e8:d5:1d:e3:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:36:56.505452    6020 main.go:141] libmachine: STDOUT: 
	I0920 10:36:56.505473    6020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:56.505518    6020 fix.go:56] duration metric: took 13.229334ms for fixHost
	I0920 10:36:56.505522    6020 start.go:83] releasing machines lock for "old-k8s-version-305000", held for 13.248625ms
	W0920 10:36:56.505529    6020 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:56.505568    6020 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:56.505572    6020 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:01.507859    6020 start.go:360] acquireMachinesLock for old-k8s-version-305000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:01.508408    6020 start.go:364] duration metric: took 424.125µs to acquireMachinesLock for "old-k8s-version-305000"
	I0920 10:37:01.508502    6020 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:01.508522    6020 fix.go:54] fixHost starting: 
	I0920 10:37:01.509313    6020 fix.go:112] recreateIfNeeded on old-k8s-version-305000: state=Stopped err=<nil>
	W0920 10:37:01.509343    6020 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:01.518833    6020 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-305000" ...
	I0920 10:37:01.521851    6020 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:01.522122    6020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e8:d5:1d:e3:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/old-k8s-version-305000/disk.qcow2
	I0920 10:37:01.531843    6020 main.go:141] libmachine: STDOUT: 
	I0920 10:37:01.531897    6020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:01.531983    6020 fix.go:56] duration metric: took 23.46275ms for fixHost
	I0920 10:37:01.532000    6020 start.go:83] releasing machines lock for "old-k8s-version-305000", held for 23.568583ms
	W0920 10:37:01.532181    6020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:01.540799    6020 out.go:201] 
	W0920 10:37:01.544997    6020 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:01.545027    6020 out.go:270] * 
	* 
	W0920 10:37:01.547612    6020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:01.555759    6020 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-305000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (65.377ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
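
Every start failure in this run reduces to the same condition: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the profile stays Stopped. As a minimal sketch of how that precondition could be probed on the CI host before rerunning the suite (a hypothetical standalone check, not part of the minikube test code), a few lines of Go are enough:

	// probe_socket_vmnet.go: minimal sketch that checks whether the socket_vmnet
	// daemon is accepting connections on its socket. The path below matches the
	// SocketVMnetPath shown in the cluster config dumps in this log; adjust it if
	// the daemon is configured elsewhere.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			// Same condition the driver reports as:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the way the driver does here, restarting the socket_vmnet service on the host is the likely fix before any of the qemu2-based tests can pass.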

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-305000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (32.781375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-305000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-305000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-305000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.879875ms)

** stderr ** 
	error: context "old-k8s-version-305000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-305000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (29.027542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-305000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (30.012959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
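
The block above is a want/got diff: every expected v1.20.0 image carries a leading "-", meaning the image list came back empty for a profile whose VM never started. As a rough sketch of how a diff of this shape is typically produced (the test source is not reproduced here, and the use of the go-cmp package is an assumption), compare the expected and actual slices and print the result:

	// image_diff.go: minimal sketch of a want/got comparison that yields a
	// "-want +got" style diff like the one above. The expected list is copied
	// from this report; the empty got slice stands in for the image list of a
	// VM that never came up.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/kube-controller-manager:v1.20.0",
			"k8s.gcr.io/kube-proxy:v1.20.0",
			"k8s.gcr.io/kube-scheduler:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		var got []string // nothing listed: the VM never started

		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}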

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-305000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-305000 --alsologtostderr -v=1: exit status 83 (41.595958ms)

-- stdout --
	* The control-plane node old-k8s-version-305000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-305000"

-- /stdout --
** stderr ** 
	I0920 10:37:01.825081    6039 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:01.826114    6039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:01.826118    6039 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:01.826120    6039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:01.826250    6039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:01.826458    6039 out.go:352] Setting JSON to false
	I0920 10:37:01.826469    6039 mustload.go:65] Loading cluster: old-k8s-version-305000
	I0920 10:37:01.826711    6039 config.go:182] Loaded profile config "old-k8s-version-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:37:01.831395    6039 out.go:177] * The control-plane node old-k8s-version-305000 host is not running: state=Stopped
	I0920 10:37:01.834424    6039 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-305000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-305000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (30.288625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (29.894125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
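
Each post-mortem above runs the same host-state check and treats exit status 7 as "stopped, which may be acceptable". A minimal way to reproduce that check outside the test harness is sketched below; the binary path and profile name are taken verbatim from this run, and the interpretation of the exit code is left to the reader:

	// status_check.go: sketch that re-runs the post-mortem status command from
	// this report and prints its output and exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "old-k8s-version-305000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run:", err)
		}
	}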

TestStartStop/group/no-preload/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.781426375s)

-- stdout --
	* [no-preload-266000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-266000" primary control-plane node in "no-preload-266000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-266000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:37:02.154555    6056 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:02.154673    6056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:02.154678    6056 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:02.154681    6056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:02.154811    6056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:02.155883    6056 out.go:352] Setting JSON to false
	I0920 10:37:02.172478    6056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3985,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:02.172580    6056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:02.177274    6056 out.go:177] * [no-preload-266000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:02.184250    6056 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:02.184283    6056 notify.go:220] Checking for updates...
	I0920 10:37:02.192125    6056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:02.195141    6056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:02.198211    6056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:02.201221    6056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:02.204185    6056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:02.207517    6056 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:02.207584    6056 config.go:182] Loaded profile config "stopped-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:37:02.207624    6056 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:02.212194    6056 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:37:02.219190    6056 start.go:297] selected driver: qemu2
	I0920 10:37:02.219196    6056 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:37:02.219203    6056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:02.221467    6056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:37:02.225154    6056 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:37:02.228264    6056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:02.228295    6056 cni.go:84] Creating CNI manager for ""
	I0920 10:37:02.228317    6056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:02.228321    6056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:37:02.228346    6056 start.go:340] cluster config:
	{Name:no-preload-266000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:02.231727    6056 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.237116    6056 out.go:177] * Starting "no-preload-266000" primary control-plane node in "no-preload-266000" cluster
	I0920 10:37:02.241144    6056 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:02.241203    6056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/no-preload-266000/config.json ...
	I0920 10:37:02.241217    6056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/no-preload-266000/config.json: {Name:mk16cda0c98db69cf8130a72892a2e8c83833f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:37:02.241243    6056 cache.go:107] acquiring lock: {Name:mkacf24150ca8700e072bc1e4826c6eda27d387a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241251    6056 cache.go:107] acquiring lock: {Name:mkf3a17fb7edba2f6d9f0b5de338a2d6bf098be2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241256    6056 cache.go:107] acquiring lock: {Name:mk279c973ea680b877a345f04baa733284a7de43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241359    6056 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:37:02.241367    6056 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 119.25µs
	I0920 10:37:02.241373    6056 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:37:02.241379    6056 cache.go:107] acquiring lock: {Name:mk8afafee00f395edec1a71df7e9f70463227624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241383    6056 cache.go:107] acquiring lock: {Name:mka955c530d45047c78e07f7d7967f8cfad83c9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241421    6056 cache.go:107] acquiring lock: {Name:mk33a39fb9833594af5780895c19072d0b484822 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241450    6056 start.go:360] acquireMachinesLock for no-preload-266000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:02.241467    6056 cache.go:107] acquiring lock: {Name:mk66002edd85f78f5e094f0733ff88df50cec4e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241480    6056 cache.go:107] acquiring lock: {Name:mkf2a5ca361888fdac3ab66573f1cf7f42382bf2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:02.241503    6056 start.go:364] duration metric: took 48µs to acquireMachinesLock for "no-preload-266000"
	I0920 10:37:02.241579    6056 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 10:37:02.241611    6056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 10:37:02.241639    6056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 10:37:02.241617    6056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 10:37:02.241548    6056 start.go:93] Provisioning new machine with config: &{Name:no-preload-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:02.241665    6056 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:02.241619    6056 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 10:37:02.241735    6056 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 10:37:02.241623    6056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 10:37:02.246161    6056 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:02.253721    6056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 10:37:02.254063    6056 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 10:37:02.254721    6056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 10:37:02.254776    6056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 10:37:02.254815    6056 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 10:37:02.256060    6056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 10:37:02.256062    6056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 10:37:02.262064    6056 start.go:159] libmachine.API.Create for "no-preload-266000" (driver="qemu2")
	I0920 10:37:02.262081    6056 client.go:168] LocalClient.Create starting
	I0920 10:37:02.262162    6056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:02.262198    6056 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:02.262208    6056 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:02.262244    6056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:02.262267    6056 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:02.262280    6056 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:02.262724    6056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:02.430210    6056 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:02.489875    6056 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:02.489893    6056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:02.490079    6056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:02.499898    6056 main.go:141] libmachine: STDOUT: 
	I0920 10:37:02.499921    6056 main.go:141] libmachine: STDERR: 
	I0920 10:37:02.499989    6056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2 +20000M
	I0920 10:37:02.509033    6056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:02.509058    6056 main.go:141] libmachine: STDERR: 
	I0920 10:37:02.509086    6056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:02.509092    6056 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:02.509107    6056 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:02.509134    6056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:df:b7:c8:db:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:02.511034    6056 main.go:141] libmachine: STDOUT: 
	I0920 10:37:02.511052    6056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:02.511071    6056 client.go:171] duration metric: took 248.985ms to LocalClient.Create
	I0920 10:37:02.661020    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 10:37:02.668753    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0920 10:37:02.683678    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 10:37:02.703862    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 10:37:02.712036    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0920 10:37:02.721613    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 10:37:02.761184    6056 cache.go:162] opening:  /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 10:37:02.814628    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 10:37:02.814640    6056 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 573.337209ms
	I0920 10:37:02.814650    6056 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 10:37:04.511204    6056 start.go:128] duration metric: took 2.269542417s to createHost
	I0920 10:37:04.511219    6056 start.go:83] releasing machines lock for "no-preload-266000", held for 2.26972225s
	W0920 10:37:04.511231    6056 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:04.526739    6056 out.go:177] * Deleting "no-preload-266000" in qemu2 ...
	W0920 10:37:04.538500    6056 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:04.538508    6056 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:04.958742    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 10:37:04.958757    6056 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.717344208s
	I0920 10:37:04.958763    6056 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 10:37:06.449655    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 10:37:06.449672    6056 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.208247167s
	I0920 10:37:06.449681    6056 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 10:37:06.608971    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 10:37:06.608993    6056 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.3677675s
	I0920 10:37:06.609004    6056 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 10:37:07.398837    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 10:37:07.398864    6056 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.157652208s
	I0920 10:37:07.398877    6056 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 10:37:07.813683    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 10:37:07.813721    6056 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.57233925s
	I0920 10:37:07.813741    6056 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 10:37:09.539547    6056 start.go:360] acquireMachinesLock for no-preload-266000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:09.540093    6056 start.go:364] duration metric: took 464.042µs to acquireMachinesLock for "no-preload-266000"
	I0920 10:37:09.540223    6056 start.go:93] Provisioning new machine with config: &{Name:no-preload-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:09.540450    6056 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:09.551077    6056 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:09.594975    6056 start.go:159] libmachine.API.Create for "no-preload-266000" (driver="qemu2")
	I0920 10:37:09.595024    6056 client.go:168] LocalClient.Create starting
	I0920 10:37:09.595122    6056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:09.595184    6056 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:09.595202    6056 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:09.595267    6056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:09.595307    6056 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:09.595322    6056 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:09.595801    6056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:09.781162    6056 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:09.846463    6056 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:09.846478    6056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:09.846715    6056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:09.856426    6056 main.go:141] libmachine: STDOUT: 
	I0920 10:37:09.856446    6056 main.go:141] libmachine: STDERR: 
	I0920 10:37:09.856521    6056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2 +20000M
	I0920 10:37:09.864893    6056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:09.864909    6056 main.go:141] libmachine: STDERR: 
	I0920 10:37:09.864922    6056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:09.864928    6056 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:09.864938    6056 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:09.864988    6056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:8d:ff:b2:f3:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:09.866700    6056 main.go:141] libmachine: STDOUT: 
	I0920 10:37:09.866721    6056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:09.866735    6056 client.go:171] duration metric: took 271.707125ms to LocalClient.Create
	I0920 10:37:10.977237    6056 cache.go:157] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 10:37:10.977304    6056 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.735965042s
	I0920 10:37:10.977330    6056 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 10:37:10.977382    6056 cache.go:87] Successfully saved all images to host disk.
	I0920 10:37:11.868940    6056 start.go:128] duration metric: took 2.32847s to createHost
	I0920 10:37:11.869031    6056 start.go:83] releasing machines lock for "no-preload-266000", held for 2.328923958s
	W0920 10:37:11.869359    6056 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:11.883896    6056 out.go:201] 
	W0920 10:37:11.889057    6056 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:11.889097    6056 out.go:270] * 
	* 
	W0920 10:37:11.891320    6056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:11.899855    6056 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (52.340833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.84s)

TestStartStop/group/embed-certs/serial/FirstStart (11.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.397430167s)

-- stdout --
	* [embed-certs-358000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-358000" primary control-plane node in "embed-certs-358000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-358000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:37:10.447456    6101 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:10.447577    6101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:10.447581    6101 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:10.447583    6101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:10.447713    6101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:10.448717    6101 out.go:352] Setting JSON to false
	I0920 10:37:10.464905    6101 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3993,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:10.464972    6101 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:10.469424    6101 out.go:177] * [embed-certs-358000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:10.477257    6101 notify.go:220] Checking for updates...
	I0920 10:37:10.481209    6101 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:10.488175    6101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:10.496246    6101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:10.504192    6101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:10.512199    6101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:10.520242    6101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:10.534692    6101 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:10.534770    6101 config.go:182] Loaded profile config "no-preload-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:10.534823    6101 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:10.539226    6101 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:37:10.547193    6101 start.go:297] selected driver: qemu2
	I0920 10:37:10.547198    6101 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:37:10.547204    6101 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:10.549887    6101 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:37:10.554236    6101 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:37:10.558330    6101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:10.558368    6101 cni.go:84] Creating CNI manager for ""
	I0920 10:37:10.558393    6101 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:10.558398    6101 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:37:10.558423    6101 start.go:340] cluster config:
	{Name:embed-certs-358000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-358000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:10.562513    6101 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:10.579130    6101 out.go:177] * Starting "embed-certs-358000" primary control-plane node in "embed-certs-358000" cluster
	I0920 10:37:10.583229    6101 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:10.583246    6101 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:10.583254    6101 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:10.583330    6101 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:10.583344    6101 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:10.583412    6101 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/embed-certs-358000/config.json ...
	I0920 10:37:10.583424    6101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/embed-certs-358000/config.json: {Name:mk645e48ce2e94bba8ed983a0ec83f0ab6cc346c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:37:10.583850    6101 start.go:360] acquireMachinesLock for embed-certs-358000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:11.869205    6101 start.go:364] duration metric: took 1.285340333s to acquireMachinesLock for "embed-certs-358000"
	I0920 10:37:11.869406    6101 start.go:93] Provisioning new machine with config: &{Name:embed-certs-358000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-358000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:11.869656    6101 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:11.879917    6101 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:11.930629    6101 start.go:159] libmachine.API.Create for "embed-certs-358000" (driver="qemu2")
	I0920 10:37:11.930708    6101 client.go:168] LocalClient.Create starting
	I0920 10:37:11.930810    6101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:11.930866    6101 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:11.930882    6101 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:11.930942    6101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:11.930999    6101 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:11.931017    6101 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:11.931693    6101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:12.130439    6101 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:12.358420    6101 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:12.358428    6101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:12.358605    6101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:12.368068    6101 main.go:141] libmachine: STDOUT: 
	I0920 10:37:12.368087    6101 main.go:141] libmachine: STDERR: 
	I0920 10:37:12.368143    6101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2 +20000M
	I0920 10:37:12.376027    6101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:12.376042    6101 main.go:141] libmachine: STDERR: 
	I0920 10:37:12.376058    6101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:12.376065    6101 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:12.376076    6101 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:12.376101    6101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4c:b4:40:52:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:12.377794    6101 main.go:141] libmachine: STDOUT: 
	I0920 10:37:12.377808    6101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:12.377826    6101 client.go:171] duration metric: took 447.112042ms to LocalClient.Create
	I0920 10:37:14.380095    6101 start.go:128] duration metric: took 2.510414333s to createHost
	I0920 10:37:14.380149    6101 start.go:83] releasing machines lock for "embed-certs-358000", held for 2.510892708s
	W0920 10:37:14.380208    6101 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:14.387452    6101 out.go:177] * Deleting "embed-certs-358000" in qemu2 ...
	W0920 10:37:14.422234    6101 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:14.422256    6101 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:19.424405    6101 start.go:360] acquireMachinesLock for embed-certs-358000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:19.424852    6101 start.go:364] duration metric: took 357.959µs to acquireMachinesLock for "embed-certs-358000"
	I0920 10:37:19.425012    6101 start.go:93] Provisioning new machine with config: &{Name:embed-certs-358000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-358000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:19.425306    6101 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:19.430918    6101 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:19.481340    6101 start.go:159] libmachine.API.Create for "embed-certs-358000" (driver="qemu2")
	I0920 10:37:19.481398    6101 client.go:168] LocalClient.Create starting
	I0920 10:37:19.481516    6101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:19.481579    6101 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:19.481594    6101 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:19.481679    6101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:19.481724    6101 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:19.481740    6101 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:19.482255    6101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:19.654870    6101 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:19.729408    6101 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:19.729414    6101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:19.729613    6101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:19.739195    6101 main.go:141] libmachine: STDOUT: 
	I0920 10:37:19.739217    6101 main.go:141] libmachine: STDERR: 
	I0920 10:37:19.739270    6101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2 +20000M
	I0920 10:37:19.747144    6101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:19.747161    6101 main.go:141] libmachine: STDERR: 
	I0920 10:37:19.747176    6101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:19.747183    6101 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:19.747192    6101 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:19.747230    6101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:33:64:e9:14:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:19.748862    6101 main.go:141] libmachine: STDOUT: 
	I0920 10:37:19.748875    6101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:19.748888    6101 client.go:171] duration metric: took 267.484167ms to LocalClient.Create
	I0920 10:37:21.751152    6101 start.go:128] duration metric: took 2.325800167s to createHost
	I0920 10:37:21.751224    6101 start.go:83] releasing machines lock for "embed-certs-358000", held for 2.326360459s
	W0920 10:37:21.751616    6101 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:21.768325    6101 out.go:201] 
	W0920 10:37:21.781388    6101 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:21.781419    6101 out.go:270] * 
	* 
	W0920 10:37:21.783851    6101 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:21.792229    6101 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (67.258125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-266000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-266000 create -f testdata/busybox.yaml: exit status 1 (30.897209ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-266000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-266000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (33.425292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (33.2875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-266000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-266000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-266000 describe deploy/metrics-server -n kube-system: exit status 1 (27.585042ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-266000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-266000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (30.480958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.113639375s)

                                                
                                                
-- stdout --
	* [no-preload-266000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-266000" primary control-plane node in "no-preload-266000" cluster
	* Restarting existing qemu2 VM for "no-preload-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:15.756636    6147 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:15.756807    6147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:15.756811    6147 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:15.756813    6147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:15.756944    6147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:15.757970    6147 out.go:352] Setting JSON to false
	I0920 10:37:15.773858    6147 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3998,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:15.773938    6147 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:15.779377    6147 out.go:177] * [no-preload-266000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:15.787280    6147 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:15.787383    6147 notify.go:220] Checking for updates...
	I0920 10:37:15.794298    6147 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:15.797295    6147 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:15.800393    6147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:15.803262    6147 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:15.806288    6147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:15.808168    6147 config.go:182] Loaded profile config "no-preload-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:15.808467    6147 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:15.813287    6147 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:37:15.820116    6147 start.go:297] selected driver: qemu2
	I0920 10:37:15.820124    6147 start.go:901] validating driver "qemu2" against &{Name:no-preload-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:no-preload-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:15.820194    6147 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:15.822542    6147 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:15.822572    6147 cni.go:84] Creating CNI manager for ""
	I0920 10:37:15.822593    6147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:15.822622    6147 start.go:340] cluster config:
	{Name:no-preload-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-266000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:15.826244    6147 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.833279    6147 out.go:177] * Starting "no-preload-266000" primary control-plane node in "no-preload-266000" cluster
	I0920 10:37:15.837259    6147 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:15.837356    6147 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/no-preload-266000/config.json ...
	I0920 10:37:15.837381    6147 cache.go:107] acquiring lock: {Name:mkf2a5ca361888fdac3ab66573f1cf7f42382bf2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837386    6147 cache.go:107] acquiring lock: {Name:mkacf24150ca8700e072bc1e4826c6eda27d387a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837404    6147 cache.go:107] acquiring lock: {Name:mk33a39fb9833594af5780895c19072d0b484822 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837456    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 10:37:15.837465    6147 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 87.25µs
	I0920 10:37:15.837473    6147 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 10:37:15.837456    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 10:37:15.837464    6147 cache.go:107] acquiring lock: {Name:mk66002edd85f78f5e094f0733ff88df50cec4e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837482    6147 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 78.5µs
	I0920 10:37:15.837486    6147 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 10:37:15.837465    6147 cache.go:107] acquiring lock: {Name:mk279c973ea680b877a345f04baa733284a7de43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837381    6147 cache.go:107] acquiring lock: {Name:mkf3a17fb7edba2f6d9f0b5de338a2d6bf098be2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837515    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 10:37:15.837519    6147 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 56.167µs
	I0920 10:37:15.837521    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 10:37:15.837523    6147 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 10:37:15.837459    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 10:37:15.837525    6147 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 60.708µs
	I0920 10:37:15.837528    6147 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 10:37:15.837528    6147 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 147.875µs
	I0920 10:37:15.837531    6147 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 10:37:15.837531    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:37:15.837550    6147 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.583µs
	I0920 10:37:15.837556    6147 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:37:15.837584    6147 cache.go:107] acquiring lock: {Name:mka955c530d45047c78e07f7d7967f8cfad83c9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837585    6147 cache.go:107] acquiring lock: {Name:mk8afafee00f395edec1a71df7e9f70463227624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:15.837650    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 10:37:15.837658    6147 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 132.583µs
	I0920 10:37:15.837651    6147 cache.go:115] /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 10:37:15.837667    6147 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 154.667µs
	I0920 10:37:15.837673    6147 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 10:37:15.837663    6147 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 10:37:15.837680    6147 cache.go:87] Successfully saved all images to host disk.
	I0920 10:37:15.837803    6147 start.go:360] acquireMachinesLock for no-preload-266000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:15.837836    6147 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "no-preload-266000"
	I0920 10:37:15.837846    6147 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:15.837850    6147 fix.go:54] fixHost starting: 
	I0920 10:37:15.837968    6147 fix.go:112] recreateIfNeeded on no-preload-266000: state=Stopped err=<nil>
	W0920 10:37:15.837980    6147 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:15.846215    6147 out.go:177] * Restarting existing qemu2 VM for "no-preload-266000" ...
	I0920 10:37:15.850247    6147 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:15.850284    6147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:8d:ff:b2:f3:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:15.852337    6147 main.go:141] libmachine: STDOUT: 
	I0920 10:37:15.852356    6147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:15.852384    6147 fix.go:56] duration metric: took 14.532917ms for fixHost
	I0920 10:37:15.852388    6147 start.go:83] releasing machines lock for "no-preload-266000", held for 14.547875ms
	W0920 10:37:15.852394    6147 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:15.852423    6147 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:15.852429    6147 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:20.854631    6147 start.go:360] acquireMachinesLock for no-preload-266000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:21.751383    6147 start.go:364] duration metric: took 896.638875ms to acquireMachinesLock for "no-preload-266000"
	I0920 10:37:21.751560    6147 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:21.751584    6147 fix.go:54] fixHost starting: 
	I0920 10:37:21.752318    6147 fix.go:112] recreateIfNeeded on no-preload-266000: state=Stopped err=<nil>
	W0920 10:37:21.752343    6147 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:21.777204    6147 out.go:177] * Restarting existing qemu2 VM for "no-preload-266000" ...
	I0920 10:37:21.785277    6147 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:21.785513    6147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:8d:ff:b2:f3:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/no-preload-266000/disk.qcow2
	I0920 10:37:21.794758    6147 main.go:141] libmachine: STDOUT: 
	I0920 10:37:21.794833    6147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:21.794924    6147 fix.go:56] duration metric: took 43.341958ms for fixHost
	I0920 10:37:21.794944    6147 start.go:83] releasing machines lock for "no-preload-266000", held for 43.523667ms
	W0920 10:37:21.795181    6147 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:21.816313    6147 out.go:201] 
	W0920 10:37:21.820431    6147 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:21.820483    6147 out.go:270] * 
	* 
	W0920 10:37:21.823427    6147 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:21.832293    6147 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-266000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (51.1465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-358000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-358000 create -f testdata/busybox.yaml: exit status 1 (31.467833ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-358000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-358000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (30.846875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (34.512667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-266000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (34.206375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-266000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.670667ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-266000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (31.055959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-358000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-358000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-358000 describe deploy/metrics-server -n kube-system: exit status 1 (28.464ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-358000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-358000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (37.626959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-266000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (30.886958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-266000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-266000 --alsologtostderr -v=1: exit status 83 (48.782334ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-266000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-266000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:22.100939    6180 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:22.101076    6180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:22.101083    6180 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:22.101085    6180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:22.101230    6180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:22.101453    6180 out.go:352] Setting JSON to false
	I0920 10:37:22.101464    6180 mustload.go:65] Loading cluster: no-preload-266000
	I0920 10:37:22.101686    6180 config.go:182] Loaded profile config "no-preload-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:22.108296    6180 out.go:177] * The control-plane node no-preload-266000 host is not running: state=Stopped
	I0920 10:37:22.114251    6180 out.go:177]   To start a cluster, run: "minikube start -p no-preload-266000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-266000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (35.288959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (28.574459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.890402458s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-385000" primary control-plane node in "default-k8s-diff-port-385000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-385000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:22.534339    6212 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:22.534467    6212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:22.534470    6212 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:22.534472    6212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:22.534606    6212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:22.535681    6212 out.go:352] Setting JSON to false
	I0920 10:37:22.551751    6212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4005,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:22.551820    6212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:22.557251    6212 out.go:177] * [default-k8s-diff-port-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:22.564202    6212 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:22.564255    6212 notify.go:220] Checking for updates...
	I0920 10:37:22.571250    6212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:22.574270    6212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:22.577265    6212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:22.580250    6212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:22.583243    6212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:22.586608    6212 config.go:182] Loaded profile config "embed-certs-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:22.586669    6212 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:22.586725    6212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:22.591285    6212 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:37:22.598217    6212 start.go:297] selected driver: qemu2
	I0920 10:37:22.598223    6212 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:37:22.598229    6212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:22.600484    6212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:37:22.604236    6212 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:37:22.608245    6212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:22.608264    6212 cni.go:84] Creating CNI manager for ""
	I0920 10:37:22.608292    6212 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:22.608297    6212 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:37:22.608324    6212 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:22.611554    6212 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:22.618188    6212 out.go:177] * Starting "default-k8s-diff-port-385000" primary control-plane node in "default-k8s-diff-port-385000" cluster
	I0920 10:37:22.622190    6212 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:22.622206    6212 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:22.622212    6212 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:22.622268    6212 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:22.622274    6212 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:22.622333    6212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/default-k8s-diff-port-385000/config.json ...
	I0920 10:37:22.622345    6212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/default-k8s-diff-port-385000/config.json: {Name:mk4279a8d2ecefcb6a4c96b1b964fe5e001a61b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:37:22.622661    6212 start.go:360] acquireMachinesLock for default-k8s-diff-port-385000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:22.622697    6212 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "default-k8s-diff-port-385000"
	I0920 10:37:22.622709    6212 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:22.622740    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:22.630200    6212 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:22.647132    6212 start.go:159] libmachine.API.Create for "default-k8s-diff-port-385000" (driver="qemu2")
	I0920 10:37:22.647165    6212 client.go:168] LocalClient.Create starting
	I0920 10:37:22.647239    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:22.647271    6212 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:22.647281    6212 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:22.647320    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:22.647343    6212 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:22.647350    6212 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:22.647733    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:22.811803    6212 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:22.938292    6212 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:22.938299    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:22.938489    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:22.947801    6212 main.go:141] libmachine: STDOUT: 
	I0920 10:37:22.947824    6212 main.go:141] libmachine: STDERR: 
	I0920 10:37:22.947886    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2 +20000M
	I0920 10:37:22.955863    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:22.955875    6212 main.go:141] libmachine: STDERR: 
	I0920 10:37:22.955889    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:22.955893    6212 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:22.955905    6212 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:22.955932    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cd:a0:c4:32:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:22.957554    6212 main.go:141] libmachine: STDOUT: 
	I0920 10:37:22.957571    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:22.957588    6212 client.go:171] duration metric: took 310.41875ms to LocalClient.Create
	I0920 10:37:24.959813    6212 start.go:128] duration metric: took 2.337050792s to createHost
	I0920 10:37:24.959881    6212 start.go:83] releasing machines lock for "default-k8s-diff-port-385000", held for 2.337186958s
	W0920 10:37:24.959950    6212 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:24.977410    6212 out.go:177] * Deleting "default-k8s-diff-port-385000" in qemu2 ...
	W0920 10:37:25.012924    6212 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:25.012959    6212 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:30.015088    6212 start.go:360] acquireMachinesLock for default-k8s-diff-port-385000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:30.015507    6212 start.go:364] duration metric: took 341.709µs to acquireMachinesLock for "default-k8s-diff-port-385000"
	I0920 10:37:30.015653    6212 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:30.016044    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:30.020723    6212 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:30.071145    6212 start.go:159] libmachine.API.Create for "default-k8s-diff-port-385000" (driver="qemu2")
	I0920 10:37:30.071198    6212 client.go:168] LocalClient.Create starting
	I0920 10:37:30.071330    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:30.071405    6212 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:30.071424    6212 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:30.071479    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:30.071524    6212 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:30.071544    6212 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:30.072080    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:30.246994    6212 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:30.306158    6212 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:30.306167    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:30.306368    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:30.315685    6212 main.go:141] libmachine: STDOUT: 
	I0920 10:37:30.315706    6212 main.go:141] libmachine: STDERR: 
	I0920 10:37:30.315778    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2 +20000M
	I0920 10:37:30.323684    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:30.323703    6212 main.go:141] libmachine: STDERR: 
	I0920 10:37:30.323714    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:30.323718    6212 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:30.323726    6212 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:30.323762    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b7:28:e7:f8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:30.325453    6212 main.go:141] libmachine: STDOUT: 
	I0920 10:37:30.325465    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:30.325478    6212 client.go:171] duration metric: took 254.273875ms to LocalClient.Create
	I0920 10:37:32.327646    6212 start.go:128] duration metric: took 2.311584916s to createHost
	I0920 10:37:32.327690    6212 start.go:83] releasing machines lock for "default-k8s-diff-port-385000", held for 2.312165875s
	W0920 10:37:32.327970    6212 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-385000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-385000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:32.346652    6212 out.go:201] 
	W0920 10:37:32.355647    6212 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:32.355686    6212 out.go:270] * 
	* 
	W0920 10:37:32.358338    6212 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:32.371589    6212 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (69.19025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.194548292s)

                                                
                                                
-- stdout --
	* [embed-certs-358000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-358000" primary control-plane node in "embed-certs-358000" cluster
	* Restarting existing qemu2 VM for "embed-certs-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:25.245940    6238 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:25.246053    6238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:25.246056    6238 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:25.246059    6238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:25.246205    6238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:25.247270    6238 out.go:352] Setting JSON to false
	I0920 10:37:25.263434    6238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4008,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:25.263502    6238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:25.268407    6238 out.go:177] * [embed-certs-358000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:25.275377    6238 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:25.275439    6238 notify.go:220] Checking for updates...
	I0920 10:37:25.283289    6238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:25.290337    6238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:25.293372    6238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:25.296300    6238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:25.299359    6238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:25.302612    6238 config.go:182] Loaded profile config "embed-certs-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:25.302908    6238 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:25.307338    6238 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:37:25.314366    6238 start.go:297] selected driver: qemu2
	I0920 10:37:25.314371    6238 start.go:901] validating driver "qemu2" against &{Name:embed-certs-358000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-358000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:25.314433    6238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:25.316765    6238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:25.316794    6238 cni.go:84] Creating CNI manager for ""
	I0920 10:37:25.316821    6238 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:25.316850    6238 start.go:340] cluster config:
	{Name:embed-certs-358000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-358000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:25.320496    6238 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:25.328302    6238 out.go:177] * Starting "embed-certs-358000" primary control-plane node in "embed-certs-358000" cluster
	I0920 10:37:25.332361    6238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:25.332382    6238 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:25.332392    6238 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:25.332470    6238 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:25.332477    6238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:25.332535    6238 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/embed-certs-358000/config.json ...
	I0920 10:37:25.333061    6238 start.go:360] acquireMachinesLock for embed-certs-358000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:25.333098    6238 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "embed-certs-358000"
	I0920 10:37:25.333112    6238 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:25.333117    6238 fix.go:54] fixHost starting: 
	I0920 10:37:25.333246    6238 fix.go:112] recreateIfNeeded on embed-certs-358000: state=Stopped err=<nil>
	W0920 10:37:25.333254    6238 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:25.337348    6238 out.go:177] * Restarting existing qemu2 VM for "embed-certs-358000" ...
	I0920 10:37:25.345308    6238 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:25.345355    6238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:33:64:e9:14:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:25.347370    6238 main.go:141] libmachine: STDOUT: 
	I0920 10:37:25.347389    6238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:25.347426    6238 fix.go:56] duration metric: took 14.308375ms for fixHost
	I0920 10:37:25.347432    6238 start.go:83] releasing machines lock for "embed-certs-358000", held for 14.329708ms
	W0920 10:37:25.347440    6238 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:25.347480    6238 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:25.347485    6238 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:30.349588    6238 start.go:360] acquireMachinesLock for embed-certs-358000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:32.327833    6238 start.go:364] duration metric: took 1.978218625s to acquireMachinesLock for "embed-certs-358000"
	I0920 10:37:32.328060    6238 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:32.328097    6238 fix.go:54] fixHost starting: 
	I0920 10:37:32.328866    6238 fix.go:112] recreateIfNeeded on embed-certs-358000: state=Stopped err=<nil>
	W0920 10:37:32.328892    6238 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:32.351569    6238 out.go:177] * Restarting existing qemu2 VM for "embed-certs-358000" ...
	I0920 10:37:32.356977    6238 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:32.357171    6238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:33:64:e9:14:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/embed-certs-358000/disk.qcow2
	I0920 10:37:32.366192    6238 main.go:141] libmachine: STDOUT: 
	I0920 10:37:32.366266    6238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:32.366340    6238 fix.go:56] duration metric: took 38.24975ms for fixHost
	I0920 10:37:32.366359    6238 start.go:83] releasing machines lock for "embed-certs-358000", held for 38.483625ms
	W0920 10:37:32.366527    6238 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:32.379591    6238 out.go:201] 
	W0920 10:37:32.387575    6238 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:32.387654    6238 out.go:270] * 
	* 
	W0920 10:37:32.390323    6238 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:32.399595    6238 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-358000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (59.205125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385000 create -f testdata/busybox.yaml: exit status 1 (32.28025ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-385000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (30.905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (34.717375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-358000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (35.021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-358000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-358000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-358000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.290208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-358000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-358000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (31.129916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-385000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-385000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385000 describe deploy/metrics-server -n kube-system: exit status 1 (29.053625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-385000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (35.285625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
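Note: the "addon did not load correct image" failures above come down to a substring check on `kubectl describe` output that never gets to run because the kubeconfig context is missing. A rough standalone sketch of that assertion, using the same context, deployment, and expected image string as the log (illustrative only, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const expected = "fake.domain/registry.k8s.io/echoserver:1.4"
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-385000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// This is the branch hit in the run above: with no running cluster the
		// context does not exist and kubectl exits non-zero.
		fmt.Printf("failed to get info on metrics-server deployment: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), expected) {
		fmt.Printf("addon did not load correct image, expected %q\n", expected)
	}
}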

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-358000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (31.143667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
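Note: the "(-want +got)" block above is a slice diff; with the VM stopped, `image list` returns nothing, so every expected v1.31.1 image lands on the -want side. A standalone sketch that reproduces that shape of output with github.com/google/go-cmp (the expected list is copied from the log; the empty got slice stands in for the stopped cluster):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // what `image list` yields for a VM that never started
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}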

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-358000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-358000 --alsologtostderr -v=1: exit status 83 (47.946834ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-358000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:32.684235    6271 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:32.684402    6271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:32.684405    6271 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:32.684407    6271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:32.684538    6271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:32.684776    6271 out.go:352] Setting JSON to false
	I0920 10:37:32.684786    6271 mustload.go:65] Loading cluster: embed-certs-358000
	I0920 10:37:32.685004    6271 config.go:182] Loaded profile config "embed-certs-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:32.689025    6271 out.go:177] * The control-plane node embed-certs-358000 host is not running: state=Stopped
	I0920 10:37:32.695846    6271 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-358000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-358000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (36.554417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (27.69325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.891954625s)

                                                
                                                
-- stdout --
	* [newest-cni-904000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-904000" primary control-plane node in "newest-cni-904000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-904000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:33.009872    6294 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:33.010016    6294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:33.010020    6294 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:33.010022    6294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:33.010165    6294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:33.011270    6294 out.go:352] Setting JSON to false
	I0920 10:37:33.027491    6294 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4016,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:33.027561    6294 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:33.032785    6294 out.go:177] * [newest-cni-904000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:33.039924    6294 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:33.039962    6294 notify.go:220] Checking for updates...
	I0920 10:37:33.047836    6294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:33.050923    6294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:33.053866    6294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:33.056904    6294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:33.059886    6294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:33.063139    6294 config.go:182] Loaded profile config "default-k8s-diff-port-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:33.063201    6294 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:33.063254    6294 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:33.067820    6294 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:37:33.073837    6294 start.go:297] selected driver: qemu2
	I0920 10:37:33.073843    6294 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:37:33.073849    6294 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:33.076117    6294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 10:37:33.076160    6294 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 10:37:33.080841    6294 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:37:33.087864    6294 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 10:37:33.087881    6294 cni.go:84] Creating CNI manager for ""
	I0920 10:37:33.087905    6294 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:33.087910    6294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:37:33.087944    6294 start.go:340] cluster config:
	{Name:newest-cni-904000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:33.091672    6294 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:33.098744    6294 out.go:177] * Starting "newest-cni-904000" primary control-plane node in "newest-cni-904000" cluster
	I0920 10:37:33.102804    6294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:33.102820    6294 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:33.102827    6294 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:33.102892    6294 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:33.102898    6294 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:33.102953    6294 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/newest-cni-904000/config.json ...
	I0920 10:37:33.102964    6294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/newest-cni-904000/config.json: {Name:mk891fdc0bcbadff7717ab1c1658247b5c19aedd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:37:33.103290    6294 start.go:360] acquireMachinesLock for newest-cni-904000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:33.103324    6294 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "newest-cni-904000"
	I0920 10:37:33.103336    6294 start.go:93] Provisioning new machine with config: &{Name:newest-cni-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:33.103368    6294 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:33.110826    6294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:33.128104    6294 start.go:159] libmachine.API.Create for "newest-cni-904000" (driver="qemu2")
	I0920 10:37:33.128131    6294 client.go:168] LocalClient.Create starting
	I0920 10:37:33.128202    6294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:33.128231    6294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:33.128247    6294 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:33.128282    6294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:33.128305    6294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:33.128312    6294 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:33.128660    6294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:33.305112    6294 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:33.427736    6294 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:33.427745    6294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:33.427947    6294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:33.437536    6294 main.go:141] libmachine: STDOUT: 
	I0920 10:37:33.437558    6294 main.go:141] libmachine: STDERR: 
	I0920 10:37:33.437613    6294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2 +20000M
	I0920 10:37:33.445396    6294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:33.445416    6294 main.go:141] libmachine: STDERR: 
	I0920 10:37:33.445427    6294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:33.445433    6294 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:33.445444    6294 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:33.445470    6294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:37:1b:1e:cf:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:33.447049    6294 main.go:141] libmachine: STDOUT: 
	I0920 10:37:33.447063    6294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:33.447081    6294 client.go:171] duration metric: took 318.94475ms to LocalClient.Create
	I0920 10:37:35.449246    6294 start.go:128] duration metric: took 2.345867917s to createHost
	I0920 10:37:35.449322    6294 start.go:83] releasing machines lock for "newest-cni-904000", held for 2.346000583s
	W0920 10:37:35.449389    6294 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:35.467893    6294 out.go:177] * Deleting "newest-cni-904000" in qemu2 ...
	W0920 10:37:35.500537    6294 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:35.500558    6294 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:40.502718    6294 start.go:360] acquireMachinesLock for newest-cni-904000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:40.503279    6294 start.go:364] duration metric: took 464.959µs to acquireMachinesLock for "newest-cni-904000"
	I0920 10:37:40.503405    6294 start.go:93] Provisioning new machine with config: &{Name:newest-cni-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:37:40.503674    6294 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:37:40.512364    6294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:37:40.563655    6294 start.go:159] libmachine.API.Create for "newest-cni-904000" (driver="qemu2")
	I0920 10:37:40.563703    6294 client.go:168] LocalClient.Create starting
	I0920 10:37:40.563825    6294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/ca.pem
	I0920 10:37:40.563899    6294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:40.563916    6294 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:40.563986    6294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19672-1143/.minikube/certs/cert.pem
	I0920 10:37:40.564032    6294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:37:40.564046    6294 main.go:141] libmachine: Parsing certificate...
	I0920 10:37:40.564607    6294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0920 10:37:40.740747    6294 main.go:141] libmachine: Creating SSH key...
	I0920 10:37:40.787240    6294 main.go:141] libmachine: Creating Disk image...
	I0920 10:37:40.787246    6294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:37:40.787457    6294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2.raw /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:40.796946    6294 main.go:141] libmachine: STDOUT: 
	I0920 10:37:40.796963    6294 main.go:141] libmachine: STDERR: 
	I0920 10:37:40.797024    6294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2 +20000M
	I0920 10:37:40.805279    6294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:37:40.805297    6294 main.go:141] libmachine: STDERR: 
	I0920 10:37:40.805311    6294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:40.805317    6294 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:37:40.805325    6294 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:40.805362    6294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:94:18:5f:9d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:40.807079    6294 main.go:141] libmachine: STDOUT: 
	I0920 10:37:40.807094    6294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:40.807109    6294 client.go:171] duration metric: took 243.40025ms to LocalClient.Create
	I0920 10:37:42.809309    6294 start.go:128] duration metric: took 2.305591417s to createHost
	I0920 10:37:42.809409    6294 start.go:83] releasing machines lock for "newest-cni-904000", held for 2.306118167s
	W0920 10:37:42.809871    6294 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-904000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-904000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:42.818555    6294 out.go:201] 
	W0920 10:37:42.837634    6294 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:42.837666    6294 out.go:270] * 
	* 
	W0920 10:37:42.840480    6294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:42.853602    6294 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (64.958958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-904000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.96s)
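Note: every start in this report fails at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never gets a network, the VM creation is retried once and then abandoned with GUEST_PROVISION. A quick standalone Go check of whether anything is listening on that socket (path taken from the log; this assumes socket_vmnet is expected to run as a daemon on the build agent):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state the Jenkins agent was in for this run: nothing is
		// accepting connections on the socket, hence "Connection refused".
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}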

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.403729584s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-385000" primary control-plane node in "default-k8s-diff-port-385000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:36.511693    6322 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:36.511809    6322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:36.511812    6322 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:36.511814    6322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:36.511947    6322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:36.512958    6322 out.go:352] Setting JSON to false
	I0920 10:37:36.528862    6322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4019,"bootTime":1726849837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:36.528931    6322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:36.533800    6322 out.go:177] * [default-k8s-diff-port-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:36.540765    6322 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:36.540814    6322 notify.go:220] Checking for updates...
	I0920 10:37:36.546350    6322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:36.549711    6322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:36.552802    6322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:36.555818    6322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:36.558740    6322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:36.562120    6322 config.go:182] Loaded profile config "default-k8s-diff-port-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:36.562380    6322 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:36.566773    6322 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:37:36.573754    6322 start.go:297] selected driver: qemu2
	I0920 10:37:36.573760    6322 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:36.573812    6322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:36.576096    6322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:37:36.576126    6322 cni.go:84] Creating CNI manager for ""
	I0920 10:37:36.576151    6322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:36.576177    6322 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-385000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:36.579746    6322 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:36.586781    6322 out.go:177] * Starting "default-k8s-diff-port-385000" primary control-plane node in "default-k8s-diff-port-385000" cluster
	I0920 10:37:36.590804    6322 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:36.590818    6322 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:36.590825    6322 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:36.590884    6322 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:36.590891    6322 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:36.590949    6322 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/default-k8s-diff-port-385000/config.json ...
	I0920 10:37:36.591460    6322 start.go:360] acquireMachinesLock for default-k8s-diff-port-385000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:36.591495    6322 start.go:364] duration metric: took 28.542µs to acquireMachinesLock for "default-k8s-diff-port-385000"
	I0920 10:37:36.591505    6322 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:36.591510    6322 fix.go:54] fixHost starting: 
	I0920 10:37:36.591626    6322 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385000: state=Stopped err=<nil>
	W0920 10:37:36.591635    6322 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:36.595754    6322 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-385000" ...
	I0920 10:37:36.603665    6322 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:36.603697    6322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b7:28:e7:f8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:36.605675    6322 main.go:141] libmachine: STDOUT: 
	I0920 10:37:36.605691    6322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:36.605723    6322 fix.go:56] duration metric: took 14.212291ms for fixHost
	I0920 10:37:36.605728    6322 start.go:83] releasing machines lock for "default-k8s-diff-port-385000", held for 14.22825ms
	W0920 10:37:36.605734    6322 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:36.605774    6322 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:36.605779    6322 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:41.608017    6322 start.go:360] acquireMachinesLock for default-k8s-diff-port-385000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:42.809554    6322 start.go:364] duration metric: took 1.201415041s to acquireMachinesLock for "default-k8s-diff-port-385000"
	I0920 10:37:42.809727    6322 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:42.809748    6322 fix.go:54] fixHost starting: 
	I0920 10:37:42.810530    6322 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385000: state=Stopped err=<nil>
	W0920 10:37:42.810556    6322 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:42.833514    6322 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-385000" ...
	I0920 10:37:42.841588    6322 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:42.841850    6322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b7:28:e7:f8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/default-k8s-diff-port-385000/disk.qcow2
	I0920 10:37:42.851052    6322 main.go:141] libmachine: STDOUT: 
	I0920 10:37:42.851111    6322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:42.851191    6322 fix.go:56] duration metric: took 41.443625ms for fixHost
	I0920 10:37:42.851208    6322 start.go:83] releasing machines lock for "default-k8s-diff-port-385000", held for 41.598041ms
	W0920 10:37:42.851399    6322 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:42.863540    6322 out.go:201] 
	W0920 10:37:42.867654    6322 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:42.867699    6322 out.go:270] * 
	* 
	W0920 10:37:42.869606    6322 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:42.877444    6322 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (55.465333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.46s)
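Both restart attempts above fail at the same point: the qemu2 driver cannot reach the socket_vmnet socket at /var/run/socket_vmnet. A minimal manual check on the host, using only the paths and the recovery hint already present in the log (profile name taken from this test), might look like:

# check that the socket exists and that a socket_vmnet process is serving it
ls -l /var/run/socket_vmnet
pgrep -fl socket_vmnet

# retry per the hint printed by minikube above
out/minikube-darwin-arm64 delete -p default-k8s-diff-port-385000
out/minikube-darwin-arm64 start -p default-k8s-diff-port-385000 --driver=qemu2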

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-385000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (39.55125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-385000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.660875ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (32.414125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-385000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (29.125208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-385000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-385000 --alsologtostderr -v=1: exit status 83 (40.381ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-385000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-385000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:43.138869    6354 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:43.139024    6354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:43.139028    6354 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:43.139030    6354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:43.139159    6354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:43.139378    6354 out.go:352] Setting JSON to false
	I0920 10:37:43.139385    6354 mustload.go:65] Loading cluster: default-k8s-diff-port-385000
	I0920 10:37:43.139610    6354 config.go:182] Loaded profile config "default-k8s-diff-port-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:43.143709    6354 out.go:177] * The control-plane node default-k8s-diff-port-385000 host is not running: state=Stopped
	I0920 10:37:43.147695    6354 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-385000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-385000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (29.4735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (29.58525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.183306791s)

                                                
                                                
-- stdout --
	* [newest-cni-904000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-904000" primary control-plane node in "newest-cni-904000" cluster
	* Restarting existing qemu2 VM for "newest-cni-904000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-904000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:45.237882    6383 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:45.238014    6383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:45.238017    6383 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:45.238019    6383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:45.238153    6383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:45.239203    6383 out.go:352] Setting JSON to false
	I0920 10:37:45.255345    6383 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4028,"bootTime":1726849837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:37:45.255413    6383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:37:45.260825    6383 out.go:177] * [newest-cni-904000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:37:45.267667    6383 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:37:45.267734    6383 notify.go:220] Checking for updates...
	I0920 10:37:45.273757    6383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:37:45.275305    6383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:37:45.278798    6383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:37:45.281781    6383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:37:45.283179    6383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:37:45.286073    6383 config.go:182] Loaded profile config "newest-cni-904000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:45.286342    6383 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:37:45.290769    6383 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:37:45.295774    6383 start.go:297] selected driver: qemu2
	I0920 10:37:45.295779    6383 start.go:901] validating driver "qemu2" against &{Name:newest-cni-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:45.295839    6383 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:37:45.298190    6383 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 10:37:45.298216    6383 cni.go:84] Creating CNI manager for ""
	I0920 10:37:45.298235    6383 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:37:45.298262    6383 start.go:340] cluster config:
	{Name:newest-cni-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-904000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:37:45.301823    6383 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:37:45.309792    6383 out.go:177] * Starting "newest-cni-904000" primary control-plane node in "newest-cni-904000" cluster
	I0920 10:37:45.313780    6383 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:37:45.313792    6383 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:37:45.313798    6383 cache.go:56] Caching tarball of preloaded images
	I0920 10:37:45.313850    6383 preload.go:172] Found /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:37:45.313855    6383 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:37:45.313917    6383 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/newest-cni-904000/config.json ...
	I0920 10:37:45.314423    6383 start.go:360] acquireMachinesLock for newest-cni-904000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:45.314459    6383 start.go:364] duration metric: took 30.084µs to acquireMachinesLock for "newest-cni-904000"
	I0920 10:37:45.314469    6383 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:45.314474    6383 fix.go:54] fixHost starting: 
	I0920 10:37:45.314602    6383 fix.go:112] recreateIfNeeded on newest-cni-904000: state=Stopped err=<nil>
	W0920 10:37:45.314612    6383 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:45.317718    6383 out.go:177] * Restarting existing qemu2 VM for "newest-cni-904000" ...
	I0920 10:37:45.325809    6383 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:45.325849    6383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:94:18:5f:9d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:45.328207    6383 main.go:141] libmachine: STDOUT: 
	I0920 10:37:45.328228    6383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:45.328265    6383 fix.go:56] duration metric: took 13.789209ms for fixHost
	I0920 10:37:45.328270    6383 start.go:83] releasing machines lock for "newest-cni-904000", held for 13.806292ms
	W0920 10:37:45.328277    6383 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:45.328334    6383 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:45.328339    6383 start.go:729] Will try again in 5 seconds ...
	I0920 10:37:50.330515    6383 start.go:360] acquireMachinesLock for newest-cni-904000: {Name:mk251bb8d24677eef99c9dcaca6167fc8608a5cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:37:50.331042    6383 start.go:364] duration metric: took 417.125µs to acquireMachinesLock for "newest-cni-904000"
	I0920 10:37:50.331176    6383 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:37:50.331201    6383 fix.go:54] fixHost starting: 
	I0920 10:37:50.331943    6383 fix.go:112] recreateIfNeeded on newest-cni-904000: state=Stopped err=<nil>
	W0920 10:37:50.331970    6383 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:37:50.341170    6383 out.go:177] * Restarting existing qemu2 VM for "newest-cni-904000" ...
	I0920 10:37:50.345295    6383 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:37:50.345535    6383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:94:18:5f:9d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19672-1143/.minikube/machines/newest-cni-904000/disk.qcow2
	I0920 10:37:50.355070    6383 main.go:141] libmachine: STDOUT: 
	I0920 10:37:50.355130    6383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:37:50.355231    6383 fix.go:56] duration metric: took 24.03425ms for fixHost
	I0920 10:37:50.355249    6383 start.go:83] releasing machines lock for "newest-cni-904000", held for 24.183083ms
	W0920 10:37:50.355412    6383 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-904000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-904000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:37:50.363263    6383 out.go:201] 
	W0920 10:37:50.367372    6383 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:37:50.367405    6383 out.go:270] * 
	* 
	W0920 10:37:50.370165    6383 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:37:50.379370    6383 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-904000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (69.725708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-904000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-904000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (30.493333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-904000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-904000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-904000 --alsologtostderr -v=1: exit status 83 (41.256917ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-904000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-904000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:37:50.562582    6403 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:37:50.562739    6403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:50.562742    6403 out.go:358] Setting ErrFile to fd 2...
	I0920 10:37:50.562744    6403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:37:50.562883    6403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:37:50.563102    6403 out.go:352] Setting JSON to false
	I0920 10:37:50.563110    6403 mustload.go:65] Loading cluster: newest-cni-904000
	I0920 10:37:50.563362    6403 config.go:182] Loaded profile config "newest-cni-904000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:37:50.567491    6403 out.go:177] * The control-plane node newest-cni-904000 host is not running: state=Stopped
	I0920 10:37:50.571461    6403 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-904000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-904000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (29.583833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-904000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (30.603666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-904000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.75
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 198.35
29 TestAddons/serial/Volcano 38.28
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 17.27
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.25
38 TestAddons/parallel/CSI 40.23
39 TestAddons/parallel/Headlamp 16.61
40 TestAddons/parallel/CloudSpanner 5.17
41 TestAddons/parallel/LocalPath 10.59
42 TestAddons/parallel/NvidiaDevicePlugin 6.19
43 TestAddons/parallel/Yakd 11.28
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 11.25
55 TestErrorSpam/setup 33.61
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.24
58 TestErrorSpam/pause 0.66
59 TestErrorSpam/unpause 0.59
60 TestErrorSpam/stop 64.28
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 76.78
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 36.75
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.05
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
72 TestFunctional/serial/CacheCmd/cache/add_local 1.72
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.05
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.03
80 TestFunctional/serial/ExtraConfig 34.47
81 TestFunctional/serial/ComponentHealth 0.05
82 TestFunctional/serial/LogsCmd 0.64
83 TestFunctional/serial/LogsFileCmd 0.61
84 TestFunctional/serial/InvalidService 5.07
86 TestFunctional/parallel/ConfigCmd 0.24
87 TestFunctional/parallel/DashboardCmd 8.15
88 TestFunctional/parallel/DryRun 0.24
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.26
95 TestFunctional/parallel/AddonsCmd 0.11
96 TestFunctional/parallel/PersistentVolumeClaim 26.64
98 TestFunctional/parallel/SSHCmd 0.14
99 TestFunctional/parallel/CpCmd 0.49
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.42
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
110 TestFunctional/parallel/License 0.26
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.19
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.09
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.87
118 TestFunctional/parallel/ImageCommands/Setup 1.84
119 TestFunctional/parallel/DockerEnv/bash 0.29
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.98
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
136 TestFunctional/parallel/ServiceCmd/List 0.13
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
148 TestFunctional/parallel/ProfileCmd/profile_list 0.13
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
150 TestFunctional/parallel/MountCmd/any-port 7.99
151 TestFunctional/parallel/MountCmd/specific-port 1.17
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.8
153 TestFunctional/delete_echo-server_images 0.06
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 180.73
160 TestMultiControlPlane/serial/DeployApp 32.51
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 54.17
163 TestMultiControlPlane/serial/NodeLabels 0.16
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.3
165 TestMultiControlPlane/serial/CopyFile 4.25
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.59
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.35
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 1.32
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.25
276 TestNoKubernetes/serial/Stop 3.26
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
293 TestStartStop/group/old-k8s-version/serial/Stop 1.78
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
306 TestStartStop/group/no-preload/serial/Stop 3.4
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
315 TestStartStop/group/embed-certs/serial/Stop 2.98
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.67
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
335 TestStartStop/group/newest-cni/serial/Stop 2.08
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 09:43:33.096854    1679 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 09:43:33.097276    1679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-310000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-310000: exit status 85 (96.628375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |          |
	|         | -p download-only-310000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 09:43:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 09:43:17.830095    1683 out.go:345] Setting OutFile to fd 1 ...
	I0920 09:43:17.830239    1683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:17.830242    1683 out.go:358] Setting ErrFile to fd 2...
	I0920 09:43:17.830245    1683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:17.830372    1683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	W0920 09:43:17.830465    1683 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19672-1143/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19672-1143/.minikube/config/config.json: no such file or directory
	I0920 09:43:17.831789    1683 out.go:352] Setting JSON to true
	I0920 09:43:17.849353    1683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":760,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 09:43:17.849422    1683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 09:43:17.854772    1683 out.go:97] [download-only-310000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 09:43:17.854914    1683 notify.go:220] Checking for updates...
	W0920 09:43:17.854943    1683 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 09:43:17.858719    1683 out.go:169] MINIKUBE_LOCATION=19672
	I0920 09:43:17.863739    1683 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:43:17.868541    1683 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 09:43:17.872692    1683 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:43:17.875778    1683 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	W0920 09:43:17.880690    1683 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 09:43:17.880895    1683 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 09:43:17.885739    1683 out.go:97] Using the qemu2 driver based on user configuration
	I0920 09:43:17.885760    1683 start.go:297] selected driver: qemu2
	I0920 09:43:17.885765    1683 start.go:901] validating driver "qemu2" against <nil>
	I0920 09:43:17.885845    1683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 09:43:17.889750    1683 out.go:169] Automatically selected the socket_vmnet network
	I0920 09:43:17.895488    1683 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 09:43:17.895580    1683 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 09:43:17.895637    1683 cni.go:84] Creating CNI manager for ""
	I0920 09:43:17.895682    1683 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 09:43:17.895735    1683 start.go:340] cluster config:
	{Name:download-only-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:43:17.900969    1683 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:43:17.904741    1683 out.go:97] Downloading VM boot image ...
	I0920 09:43:17.904759    1683 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0920 09:43:24.209856    1683 out.go:97] Starting "download-only-310000" primary control-plane node in "download-only-310000" cluster
	I0920 09:43:24.209880    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:24.272642    1683 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 09:43:24.272649    1683 cache.go:56] Caching tarball of preloaded images
	I0920 09:43:24.272829    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:24.276893    1683 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 09:43:24.276900    1683 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:24.381457    1683 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 09:43:31.866611    1683 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:31.866782    1683 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:32.562132    1683 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 09:43:32.562341    1683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/download-only-310000/config.json ...
	I0920 09:43:32.562359    1683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/download-only-310000/config.json: {Name:mk2133cfae0407a99eccceb5760ad0dbcf4779df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:43:32.562603    1683 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 09:43:32.562802    1683 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 09:43:33.045087    1683 out.go:193] 
	W0920 09:43:33.052019    1683 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19672-1143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0 0x1051816c0] Decompressors:map[bz2:0x140003bd770 gz:0x140003bd778 tar:0x140003bd6d0 tar.bz2:0x140003bd6e0 tar.gz:0x140003bd700 tar.xz:0x140003bd730 tar.zst:0x140003bd760 tbz2:0x140003bd6e0 tgz:0x140003bd700 txz:0x140003bd730 tzst:0x140003bd760 xz:0x140003bd780 zip:0x140003bd7c0 zst:0x140003bd788] Getters:map[file:0x14000111610 http:0x140006f21e0 https:0x140006f2230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 09:43:33.052048    1683 out_reason.go:110] 
	W0920 09:43:33.061869    1683 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 09:43:33.064973    1683 out.go:193] 
	
	
	* The control-plane node download-only-310000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-310000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-310000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.75s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-135000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-135000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.747632083s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.75s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 09:43:40.192845    1679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 09:43:40.192912    1679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-135000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-135000: exit status 85 (81.992583ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | -p download-only-310000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| delete  | -p download-only-310000        | download-only-310000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT | 20 Sep 24 09:43 PDT |
	| start   | -o=json --download-only        | download-only-135000 | jenkins | v1.34.0 | 20 Sep 24 09:43 PDT |                     |
	|         | -p download-only-135000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 09:43:33
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 09:43:33.473013    1709 out.go:345] Setting OutFile to fd 1 ...
	I0920 09:43:33.473155    1709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:33.473159    1709 out.go:358] Setting ErrFile to fd 2...
	I0920 09:43:33.473162    1709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 09:43:33.473281    1709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 09:43:33.474339    1709 out.go:352] Setting JSON to true
	I0920 09:43:33.490497    1709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":776,"bootTime":1726849837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 09:43:33.490563    1709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 09:43:33.495516    1709 out.go:97] [download-only-135000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 09:43:33.495632    1709 notify.go:220] Checking for updates...
	I0920 09:43:33.499438    1709 out.go:169] MINIKUBE_LOCATION=19672
	I0920 09:43:33.502454    1709 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 09:43:33.506498    1709 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 09:43:33.509425    1709 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:43:33.512470    1709 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	W0920 09:43:33.518388    1709 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 09:43:33.518596    1709 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 09:43:33.521417    1709 out.go:97] Using the qemu2 driver based on user configuration
	I0920 09:43:33.521427    1709 start.go:297] selected driver: qemu2
	I0920 09:43:33.521431    1709 start.go:901] validating driver "qemu2" against <nil>
	I0920 09:43:33.521492    1709 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 09:43:33.524421    1709 out.go:169] Automatically selected the socket_vmnet network
	I0920 09:43:33.529548    1709 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 09:43:33.529649    1709 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 09:43:33.529668    1709 cni.go:84] Creating CNI manager for ""
	I0920 09:43:33.529713    1709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:43:33.529720    1709 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 09:43:33.529764    1709 start.go:340] cluster config:
	{Name:download-only-135000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 09:43:33.533266    1709 iso.go:125] acquiring lock: {Name:mka8c383b458cdc0badd660a845c778cf2ca6dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:43:33.536432    1709 out.go:97] Starting "download-only-135000" primary control-plane node in "download-only-135000" cluster
	I0920 09:43:33.536439    1709 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:43:33.602115    1709 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 09:43:33.602127    1709 cache.go:56] Caching tarball of preloaded images
	I0920 09:43:33.602328    1709 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 09:43:33.607232    1709 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 09:43:33.607241    1709 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:33.697594    1709 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 09:43:38.275137    1709 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 09:43:38.275319    1709 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19672-1143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-135000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-135000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-135000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-649000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-649000: exit status 85 (61.109791ms)

-- stdout --
	* Profile "addons-649000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-649000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-649000: exit status 85 (56.573209ms)

-- stdout --
	* Profile "addons-649000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (198.35s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-649000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-649000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m18.349356167s)
--- PASS: TestAddons/Setup (198.35s)

TestAddons/serial/Volcano (38.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 7.868375ms
addons_test.go:843: volcano-admission stabilized in 7.895792ms
addons_test.go:851: volcano-controller stabilized in 7.9395ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-4wds2" [ce1a802b-cb7b-467f-ad4d-1be51ef997eb] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006809167s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-snng4" [f2a7ef29-01c5-40b7-a72c-d50c1750e35b] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0080825s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-gm9nc" [2c4fee47-580a-48f5-bdc4-a5167737cc9d] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00642625s
addons_test.go:870: (dbg) Run:  kubectl --context addons-649000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-649000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-649000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [21f7397e-5016-4bc5-b619-b104088ae621] Pending
helpers_test.go:344: "test-job-nginx-0" [21f7397e-5016-4bc5-b619-b104088ae621] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [21f7397e-5016-4bc5-b619-b104088ae621] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005792541s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-649000 addons disable volcano --alsologtostderr -v=1: (10.015892375s)
--- PASS: TestAddons/serial/Volcano (38.28s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-649000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-649000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (17.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-649000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-649000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-649000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6b6f45d8-c076-4732-867c-3b338706ed9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6b6f45d8-c076-4732-867c-3b338706ed9c] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009707667s
I0920 09:57:09.790550    1679 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-649000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-649000 addons disable ingress --alsologtostderr -v=1: (7.235380209s)
--- PASS: TestAddons/parallel/Ingress (17.27s)

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6xg4l" [6a822fe9-76e8-47e0-b4e0-597ab6cd89ce] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01174225s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-649000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-649000: (5.297744334s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.358291ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fkn5t" [d9d67f5f-acd2-4714-afb4-90a09a30730a] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004962125s
addons_test.go:413: (dbg) Run:  kubectl --context addons-649000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (40.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 09:56:33.992187    1679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 09:56:33.994629    1679 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 09:56:33.994638    1679 kapi.go:107] duration metric: took 2.485ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.493459ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-649000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-649000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c51263aa-00b6-4fbd-acba-d3b77c57bb7d] Pending
helpers_test.go:344: "task-pv-pod" [c51263aa-00b6-4fbd-acba-d3b77c57bb7d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c51263aa-00b6-4fbd-acba-d3b77c57bb7d] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004279833s
addons_test.go:528: (dbg) Run:  kubectl --context addons-649000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-649000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-649000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-649000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-649000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0a9bc42c-44ef-431f-bdcb-35def4b37a1d] Pending
helpers_test.go:344: "task-pv-pod-restore" [0a9bc42c-44ef-431f-bdcb-35def4b37a1d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0a9bc42c-44ef-431f-bdcb-35def4b37a1d] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008386792s
addons_test.go:570: (dbg) Run:  kubectl --context addons-649000 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-649000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-649000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-649000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.10221325s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.23s)

TestAddons/parallel/Headlamp (16.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-649000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-bgpfk" [881666ec-4f7c-4836-adc9-de503af6f972] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-bgpfk" [881666ec-4f7c-4836-adc9-de503af6f972] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006487417s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-649000 addons disable headlamp --alsologtostderr -v=1: (5.276193625s)
--- PASS: TestAddons/parallel/Headlamp (16.61s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-nfpjv" [f73d6e89-1df4-4d6a-828a-6776ceb925df] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004940625s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-649000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (10.59s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-649000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-649000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3766af57-b35a-403b-adf9-70c4db0e6ba6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3766af57-b35a-403b-adf9-70c4db0e6ba6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3766af57-b35a-403b-adf9-70c4db0e6ba6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008199917s
addons_test.go:938: (dbg) Run:  kubectl --context addons-649000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 ssh "cat /opt/local-path-provisioner/pvc-22cbbd5a-51bb-437b-855a-250da94f44d8_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-649000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-649000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.59s)

TestAddons/parallel/NvidiaDevicePlugin (6.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kjgbc" [3b407700-9de5-47c9-a77d-fded909d90cf] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.011209584s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-649000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (11.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2gx5m" [27f7724f-994a-4360-966b-62b20e9100de] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.007850417s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-649000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-649000 addons disable yakd --alsologtostderr -v=1: (5.2759425s)
--- PASS: TestAddons/parallel/Yakd (11.28s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-649000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-649000: (12.211703834s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-649000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-649000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-649000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (11.25s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0920 10:23:07.052901    1679 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:23:07.053106    1679 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0920 10:23:09.153724    1679 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0920 10:23:09.153975    1679 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:23:09.154033    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit
I0920 10:23:09.748394    1679 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40 0x10684ad40] Decompressors:map[bz2:0x1400071d790 gz:0x1400071d798 tar:0x1400071d740 tar.bz2:0x1400071d750 tar.gz:0x1400071d760 tar.xz:0x1400071d770 tar.zst:0x1400071d780 tbz2:0x1400071d750 tgz:0x1400071d760 txz:0x1400071d770 tzst:0x1400071d780 xz:0x1400071d7a0 zip:0x1400071d7b0 zst:0x1400071d7a8] Getters:map[file:0x140019e96f0 http:0x14001d8a9b0 https:0x14001d8aa00] Dir:false ProgressListener:<nil> Insecure:false DisableSym
links:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:23:09.748535    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate732248887/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.25s)

TestErrorSpam/setup (33.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-835000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-835000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 --driver=qemu2 : (33.60847525s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (33.61s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 pause
--- PASS: TestErrorSpam/pause (0.66s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (64.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop: (12.208320708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop: (26.030430666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-835000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-835000 stop: (26.032676417s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19672-1143/.minikube/files/etc/test/nested/copy/1679/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-862000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m16.78033625s)
--- PASS: TestFunctional/serial/StartWithProxy (76.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.75s)

=== RUN   TestFunctional/serial/SoftStart
I0920 10:00:28.078481    1679 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-862000 --alsologtostderr -v=8: (36.75290975s)
functional_test.go:663: soft start took 36.753419583s for "functional-862000" cluster.
I0920 10:01:04.822315    1679 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.75s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-862000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-862000 cache add registry.k8s.io/pause:3.1: (1.013649166s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-862000 cache add registry.k8s.io/pause:3.3: (1.012339125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3683263485/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache add minikube-local-cache-test:functional-862000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-862000 cache add minikube-local-cache-test:functional-862000: (1.40696225s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache delete minikube-local-cache-test:functional-862000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-862000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.435167ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.05s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 kubectl -- --context functional-862000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-862000 kubectl -- --context functional-862000 get pods: (2.047607833s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.05s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-862000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-862000 get pods: (1.0252615s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

TestFunctional/serial/ExtraConfig (34.47s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-862000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.468918833s)
functional_test.go:761: restart took 34.469046s for "functional-862000" cluster.
I0920 10:01:47.796739    1679 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.47s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-862000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.61s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1011675238/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (5.07s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-862000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-862000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-862000: exit status 115 (124.064708ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32646 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-862000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-862000 delete -f testdata/invalidsvc.yaml: (1.845352667s)
--- PASS: TestFunctional/serial/InvalidService (5.07s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 config get cpus: exit status 14 (30.165417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 config get cpus: exit status 14 (30.868ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DashboardCmd (8.15s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-862000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-862000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2971: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.15s)

TestFunctional/parallel/DryRun (0.24s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-862000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (128.3525ms)

-- stdout --
	* [functional-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 10:02:41.965155    2942 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:02:41.965273    2942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:41.965277    2942 out.go:358] Setting ErrFile to fd 2...
	I0920 10:02:41.965279    2942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:41.965412    2942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:02:41.966673    2942 out.go:352] Setting JSON to false
	I0920 10:02:41.985534    2942 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1924,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:02:41.985614    2942 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:02:41.989998    2942 out.go:177] * [functional-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:02:41.997799    2942 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:02:41.997876    2942 notify.go:220] Checking for updates...
	I0920 10:02:42.004979    2942 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:02:42.007905    2942 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:02:42.014909    2942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:02:42.023896    2942 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:02:42.026949    2942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:02:42.030199    2942 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:02:42.030450    2942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:02:42.034952    2942 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:02:42.041930    2942 start.go:297] selected driver: qemu2
	I0920 10:02:42.041936    2942 start.go:901] validating driver "qemu2" against &{Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:02:42.041996    2942 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:02:42.048961    2942 out.go:201] 
	W0920 10:02:42.052722    2942 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 10:02:42.056838    2942 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-862000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-862000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.924291ms)

-- stdout --
	* [functional-862000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 10:02:42.195751    2956 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:02:42.195861    2956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:42.195864    2956 out.go:358] Setting ErrFile to fd 2...
	I0920 10:02:42.195866    2956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:42.195997    2956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
	I0920 10:02:42.197454    2956 out.go:352] Setting JSON to false
	I0920 10:02:42.215398    2956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1925,"bootTime":1726849837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:02:42.215496    2956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:02:42.219905    2956 out.go:177] * [functional-862000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0920 10:02:42.227899    2956 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 10:02:42.227933    2956 notify.go:220] Checking for updates...
	I0920 10:02:42.232363    2956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	I0920 10:02:42.234927    2956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:02:42.237941    2956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:02:42.240965    2956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	I0920 10:02:42.247900    2956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:02:42.252146    2956 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:02:42.252404    2956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:02:42.256916    2956 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0920 10:02:42.263884    2956 start.go:297] selected driver: qemu2
	I0920 10:02:42.263891    2956 start.go:901] validating driver "qemu2" against &{Name:functional-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:02:42.263937    2956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:02:42.269968    2956 out.go:201] 
	W0920 10:02:42.273817    2956 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 10:02:42.277881    2956 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (26.64s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8744e522-77b1-4718-ba24-5d386614ba97] Running
E0920 10:02:09.575420    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008925375s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-862000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-862000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-862000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-862000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f6bcb406-f34a-4c3d-bda1-3b03ed3790ff] Pending
helpers_test.go:344: "sp-pod" [f6bcb406-f34a-4c3d-bda1-3b03ed3790ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f6bcb406-f34a-4c3d-bda1-3b03ed3790ff] Running
E0920 10:02:19.818618    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.009019583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-862000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-862000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-862000 delete -f testdata/storage-provisioner/pod.yaml: (1.064824s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-862000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d5b7121f-a5e4-4bf6-9f2b-317d91402c7a] Pending
helpers_test.go:344: "sp-pod" [d5b7121f-a5e4-4bf6-9f2b-317d91402c7a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d5b7121f-a5e4-4bf6-9f2b-317d91402c7a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011081416s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-862000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.64s)

TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "echo hello"
E0920 10:01:59.311773    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:01:59.318215    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:01:59.329520    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "cat /etc/hostname"
E0920 10:01:59.351683    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:01:59.395087    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.49s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh -n functional-862000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cp functional-862000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1425887753/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh -n functional-862000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh -n functional-862000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.49s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1679/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /etc/test/nested/copy/1679/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.42s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1679.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /etc/ssl/certs/1679.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1679.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /usr/share/ca-certificates/1679.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /etc/ssl/certs/16792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /usr/share/ca-certificates/16792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-862000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "sudo systemctl is-active crio": exit status 1 (77.582875ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

TestFunctional/parallel/License (0.26s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-862000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-862000
docker.io/kicbase/echo-server:functional-862000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-862000 image ls --format short --alsologtostderr:
I0920 10:02:44.863090    3009 out.go:345] Setting OutFile to fd 1 ...
I0920 10:02:44.863254    3009 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.863260    3009 out.go:358] Setting ErrFile to fd 2...
I0920 10:02:44.863263    3009 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.863411    3009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:02:44.863861    3009 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:44.863924    3009 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:44.864773    3009 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:44.864782    3009 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
I0920 10:02:44.893048    3009 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-862000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| localhost/my-image                          | functional-862000 | ba43b78596752 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-862000 | d48fd1da39190 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kicbase/echo-server               | functional-862000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-862000 image ls --format table --alsologtostderr:
I0920 10:02:46.988287    3021 out.go:345] Setting OutFile to fd 1 ...
I0920 10:02:46.988430    3021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:46.988434    3021 out.go:358] Setting ErrFile to fd 2...
I0920 10:02:46.988436    3021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:46.988551    3021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:02:46.989004    3021 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:46.989065    3021 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:46.990021    3021 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:46.990030    3021 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
I0920 10:02:47.020783    3021 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/20 10:02:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-862000 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68
cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-862000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d48fd1da3919028f05a2af2ea9b6bed2d5b77debdacb351a3ff6c2e8c5b13f6d","repoDigests":[
],"repoTags":["docker.io/library/minikube-local-cache-test:functional-862000"],"size":"30"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ba43b78596752f73cc73c0c4f6d1fd01ba3f138af6dcf4fc5fea752f4929d6a9","repoDigests":[],"repoTags":["localhost/my-image:functional-862000"],"size":"1410000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-862000 image ls --format json --alsologtostderr:
I0920 10:02:46.902786    3019 out.go:345] Setting OutFile to fd 1 ...
I0920 10:02:46.902946    3019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:46.902950    3019 out.go:358] Setting ErrFile to fd 2...
I0920 10:02:46.902952    3019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:46.903118    3019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:02:46.903572    3019 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:46.903637    3019 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:46.904439    3019 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:46.904447    3019 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
I0920 10:02:46.934938    3019 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-862000 image ls --format yaml --alsologtostderr:
- id: d48fd1da3919028f05a2af2ea9b6bed2d5b77debdacb351a3ff6c2e8c5b13f6d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-862000
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-862000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-862000 image ls --format yaml --alsologtostderr:
I0920 10:02:44.946652    3011 out.go:345] Setting OutFile to fd 1 ...
I0920 10:02:44.946822    3011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.946826    3011 out.go:358] Setting ErrFile to fd 2...
I0920 10:02:44.946828    3011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.946968    3011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:02:44.947424    3011 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:44.947482    3011 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:44.948284    3011 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:44.948292    3011 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
I0920 10:02:44.976864    3011 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh pgrep buildkitd: exit status 1 (65.770125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image build -t localhost/my-image:functional-862000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-862000 image build -t localhost/my-image:functional-862000 testdata/build --alsologtostderr: (1.722988834s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-862000 image build -t localhost/my-image:functional-862000 testdata/build --alsologtostderr:
I0920 10:02:45.095834    3015 out.go:345] Setting OutFile to fd 1 ...
I0920 10:02:45.096059    3015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.096063    3015 out.go:358] Setting ErrFile to fd 2...
I0920 10:02:45.096065    3015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.096216    3015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-1143/.minikube/bin
I0920 10:02:45.096635    3015 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:45.103485    3015 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:02:45.104427    3015 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:45.104437    3015 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19672-1143/.minikube/machines/functional-862000/id_rsa Username:docker}
I0920 10:02:45.134040    3015 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4087407264.tar
I0920 10:02:45.134113    3015 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 10:02:45.137710    3015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4087407264.tar
I0920 10:02:45.139072    3015 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4087407264.tar: stat -c "%s %y" /var/lib/minikube/build/build.4087407264.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4087407264.tar': No such file or directory
I0920 10:02:45.139090    3015 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4087407264.tar --> /var/lib/minikube/build/build.4087407264.tar (3072 bytes)
I0920 10:02:45.146952    3015 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4087407264
I0920 10:02:45.150123    3015 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4087407264 -xf /var/lib/minikube/build/build.4087407264.tar
I0920 10:02:45.153278    3015 docker.go:360] Building image: /var/lib/minikube/build/build.4087407264
I0920 10:02:45.153334    3015 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-862000 /var/lib/minikube/build/build.4087407264
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ba43b78596752f73cc73c0c4f6d1fd01ba3f138af6dcf4fc5fea752f4929d6a9 done
#8 naming to localhost/my-image:functional-862000 done
#8 DONE 0.0s
I0920 10:02:46.769970    3015 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-862000 /var/lib/minikube/build/build.4087407264: (1.61668325s)
I0920 10:02:46.770043    3015 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4087407264
I0920 10:02:46.775140    3015 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4087407264.tar
I0920 10:02:46.781182    3015 build_images.go:217] Built localhost/my-image:functional-862000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4087407264.tar
I0920 10:02:46.781199    3015 build_images.go:133] succeeded building to: functional-862000
I0920 10:02:46.781201    3015 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)
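Note: the BuildKit steps above imply a three-step Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A rough by-hand repeat of this check against the same profile, using only commands that already appear in the log, would be:

    out/minikube-darwin-arm64 -p functional-862000 ssh pgrep buildkitd        # expected to exit non-zero: the docker runtime is in use, not buildkitd
    out/minikube-darwin-arm64 -p functional-862000 image build -t localhost/my-image:functional-862000 testdata/build --alsologtostderr
    out/minikube-darwin-arm64 -p functional-862000 image ls                   # localhost/my-image:functional-862000 should now be listed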

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.82127825s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-862000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-862000 docker-env) && out/minikube-darwin-arm64 status -p functional-862000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-862000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
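Note: this test exercises the usual docker-env workflow; the same check can be repeated by hand with the commands shown above, roughly:

    eval $(out/minikube-darwin-arm64 -p functional-862000 docker-env)         # point the local docker CLI at the daemon inside the VM
    out/minikube-darwin-arm64 status -p functional-862000
    docker images                                                             # now lists the images from the functional-862000 VM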

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-862000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-862000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7t27l" [420d5686-207b-47ab-bb61-f3bd1b1c41d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7t27l" [420d5686-207b-47ab-bb61-f3bd1b1c41d5] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0920 10:02:01.889909    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:02:04.453284    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.008331375s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image load --daemon kicbase/echo-server:functional-862000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image load --daemon kicbase/echo-server:functional-862000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-862000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image load --daemon kicbase/echo-server:functional-862000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image save kicbase/echo-server:functional-862000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image rm kicbase/echo-server:functional-862000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-862000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 image save --daemon kicbase/echo-server:functional-862000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-862000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.98s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr]
E0920 10:01:59.478515    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:01:59.639883    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:01:59.962956    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2827: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-862000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8a80524b-9657-4c2b-8f73-4c7e0aaadacf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0920 10:02:00.606442    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [8a80524b-9657-4c2b-8f73-4c7e0aaadacf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009360708s
I0920 10:02:10.530073    1679 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service list -o json
functional_test.go:1494: Took "87.329459ms" to run "out/minikube-darwin-arm64 -p functional-862000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32293
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32293
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-862000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.244.100 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0920 10:02:10.619449    1679 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0920 10:02:10.657892    1679 config.go:182] Loaded profile config "functional-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
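Note: the TunnelCmd group above follows the standard minikube tunnel workflow; a rough by-hand equivalent, using the same commands the tests run, would be:

    out/minikube-darwin-arm64 -p functional-862000 tunnel --alsologtostderr &                         # keep the tunnel running in the background
    kubectl --context functional-862000 apply -f testdata/testsvc.yaml                                # LoadBalancer service nginx-svc
    kubectl --context functional-862000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A                           # resolve the service through the cluster DNS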

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "99.529083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.868083ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "97.759125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "35.487958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (7.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726851754483266000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726851754483266000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726851754483266000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001/test-1726851754483266000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.201708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 10:02:34.547946    1679 retry.go:31] will retry after 346.466041ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (84.862375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 10:02:34.981547    1679 retry.go:31] will retry after 845.071099ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 test-1726851754483266000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh cat /mount-9p/test-1726851754483266000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-862000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43] Pending
helpers_test.go:344: "busybox-mount" [b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0920 10:02:40.301237    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b2ecb4a2-c3f7-4ad5-b933-2a7c7bc9ab43] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002523958s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-862000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port659608981/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)
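Note: the MountCmd checks boil down to the standard 9p mount workflow; a rough by-hand version, with /some/host/dir as a stand-in for the temp dir the test uses, would be:

    out/minikube-darwin-arm64 mount -p functional-862000 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"               # may need a retry or two while the mount comes up
    out/minikube-darwin-arm64 -p functional-862000 ssh -- ls -la /mount-9p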

TestFunctional/parallel/MountCmd/specific-port (1.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094953390/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (79.552125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 10:02:42.556022    1679 retry.go:31] will retry after 660.079611ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094953390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "sudo umount -f /mount-9p": exit status 1 (65.059375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-862000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094953390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T" /mount1: exit status 1 (74.458875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 10:02:43.722734    1679 retry.go:31] will retry after 465.737085ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-862000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-862000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-862000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup885131782/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.80s)

TestFunctional/delete_echo-server_images (0.06s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-862000
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-862000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-862000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (180.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-930000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0920 10:03:21.261629    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:04:43.182102    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-930000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m0.536467042s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.73s)
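Note: the whole HA cluster under test comes from a single start invocation; the commands the test runs are, verbatim from the log:

    out/minikube-darwin-arm64 start -p ha-930000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr                              # reports the control-plane nodes created by --ha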

TestMultiControlPlane/serial/DeployApp (32.51s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-930000 -- rollout status deployment/busybox: (31.052494416s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-b7z4m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-bghvw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-rnhzj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-b7z4m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-bghvw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-rnhzj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-b7z4m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-bghvw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-rnhzj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.51s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-b7z4m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-b7z4m -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-bghvw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-bghvw -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-rnhzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-930000 -- exec busybox-7dff88458-rnhzj -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (54.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-930000 -v=7 --alsologtostderr
E0920 10:06:55.627119    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.634782    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.648172    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.671527    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.714145    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.797479    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:55.959291    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:56.282725    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:56.926129    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:58.208089    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:06:59.299044    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/addons-649000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:07:00.770531    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:07:05.894092    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
E0920 10:07:16.137122    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-1143/.minikube/profiles/functional-862000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-930000 -v=7 --alsologtostderr: (53.94661925s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.17s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-930000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.3s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.30s)

TestMultiControlPlane/serial/CopyFile (4.25s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp testdata/cp-test.txt ha-930000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2621769751/001/cp-test_ha-930000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000:/home/docker/cp-test.txt ha-930000-m02:/home/docker/cp-test_ha-930000_ha-930000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test_ha-930000_ha-930000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000:/home/docker/cp-test.txt ha-930000-m03:/home/docker/cp-test_ha-930000_ha-930000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test_ha-930000_ha-930000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000:/home/docker/cp-test.txt ha-930000-m04:/home/docker/cp-test_ha-930000_ha-930000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test_ha-930000_ha-930000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp testdata/cp-test.txt ha-930000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2621769751/001/cp-test_ha-930000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m02:/home/docker/cp-test.txt ha-930000:/home/docker/cp-test_ha-930000-m02_ha-930000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test_ha-930000-m02_ha-930000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m02:/home/docker/cp-test.txt ha-930000-m03:/home/docker/cp-test_ha-930000-m02_ha-930000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test_ha-930000-m02_ha-930000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m02:/home/docker/cp-test.txt ha-930000-m04:/home/docker/cp-test_ha-930000-m02_ha-930000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test_ha-930000-m02_ha-930000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp testdata/cp-test.txt ha-930000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2621769751/001/cp-test_ha-930000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m03:/home/docker/cp-test.txt ha-930000:/home/docker/cp-test_ha-930000-m03_ha-930000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test_ha-930000-m03_ha-930000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m03:/home/docker/cp-test.txt ha-930000-m02:/home/docker/cp-test_ha-930000-m03_ha-930000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test_ha-930000-m03_ha-930000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m03:/home/docker/cp-test.txt ha-930000-m04:/home/docker/cp-test_ha-930000-m03_ha-930000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test_ha-930000-m03_ha-930000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp testdata/cp-test.txt ha-930000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2621769751/001/cp-test_ha-930000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m04:/home/docker/cp-test.txt ha-930000:/home/docker/cp-test_ha-930000-m04_ha-930000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000 "sudo cat /home/docker/cp-test_ha-930000-m04_ha-930000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m04:/home/docker/cp-test.txt ha-930000-m02:/home/docker/cp-test_ha-930000-m04_ha-930000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m02 "sudo cat /home/docker/cp-test_ha-930000-m04_ha-930000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 cp ha-930000-m04:/home/docker/cp-test.txt ha-930000-m03:/home/docker/cp-test_ha-930000-m04_ha-930000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-930000 ssh -n ha-930000-m03 "sudo cat /home/docker/cp-test_ha-930000-m04_ha-930000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.25s)
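
Note: each CopyFile step above pairs a "minikube cp" with a "minikube ssh ... sudo cat" of the destination path, so the copied bytes are verified on the target node. A minimal sketch of that copy-then-read-back pattern for the primary node follows, assuming the binary path and profile name shown in this report; the expected content check is a placeholder, not the real testdata/cp-test.txt comparison.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used throughout this report.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy a local file onto the primary node...
	if out, err := run("-p", "ha-930000", "cp", "testdata/cp-test.txt", "ha-930000:/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err, out)
		return
	}
	// ...then read it back over ssh and check what came back.
	got, err := run("-p", "ha-930000", "ssh", "-n", "ha-930000", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Println("ssh failed:", err, got)
		return
	}
	// Placeholder content check; the real test compares against testdata/cp-test.txt.
	if strings.Contains(got, "cp-test") {
		fmt.Println("copy verified on ha-930000")
	}
}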

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (3.594137958s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-936000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-936000 --output=json --user=testUser: (3.352180625s)
--- PASS: TestJSONOutput/stop/Command (3.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-139000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-139000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.825416ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d46a14b6-834a-463c-845c-b9b6e9002b78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-139000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b6acc93-90ff-4de5-8b27-4f7f4f4c0071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"5f9683c0-27b3-412a-9eb8-dc1c29980259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig"}}
	{"specversion":"1.0","id":"641f2635-4209-4d4a-a313-abfd119e63fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1d8a9e33-9855-4306-bc6f-355fcde07019","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dde6e0cb-6c19-4ab0-8fec-6f1d602298ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube"}}
	{"specversion":"1.0","id":"12dbfca8-19d9-4855-9446-09441ea5c011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"354ec4c2-8881-4002-ac25-50991938be30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-139000
--- PASS: TestErrorJSONOutput (0.20s)
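
Note: with --output=json, each line minikube prints is a CloudEvents-style JSON object like the ones in the stdout block above. A minimal sketch, assuming only the fields visible in that output, of decoding one such line and pulling the exit code out of an io.k8s.sigs.minikube.error event:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the CloudEvents-style lines shown in the stdout block above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("minikube exited %s: %s (%s)\n", ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
	}
}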

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-315000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.968209ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-1143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-1143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-315000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-315000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.97875ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-315000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-315000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.631720667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.613007917s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.25s)

TestNoKubernetes/serial/Stop (3.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-315000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-315000: (3.262151625s)
--- PASS: TestNoKubernetes/serial/Stop (3.26s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-315000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-315000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.647042ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-315000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-315000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-593000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

TestStartStop/group/old-k8s-version/serial/Stop (1.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-305000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-305000 --alsologtostderr -v=3: (1.775083459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.78s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-305000 -n old-k8s-version-305000: exit status 7 (46.990083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-305000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
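
Note: "status --format={{.Host}}" renders the host state through a Go template and, as the run above shows, exits with status 7 when the profile is stopped, which the test explicitly treats as acceptable ("may be ok"). A minimal sketch of branching on that exit code from Go, assuming the binary path and profile name used in this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-305000", "-n", "old-k8s-version-305000")
	out, err := cmd.CombinedOutput()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host:", host)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the run above: exit status 7 with "Stopped" output is treated as ok.
		fmt.Println("host stopped, continuing:", host)
	default:
		fmt.Println("status failed:", err)
	}
}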

TestStartStop/group/no-preload/serial/Stop (3.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-266000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-266000 --alsologtostderr -v=3: (3.402855125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-266000 -n no-preload-266000: exit status 7 (55.401334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-266000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-358000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-358000 --alsologtostderr -v=3: (2.981404s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-358000 -n embed-certs-358000: exit status 7 (63.11575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-358000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-385000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-385000 --alsologtostderr -v=3: (3.667820416s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-385000 -n default-k8s-diff-port-385000: exit status 7 (61.509834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-385000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-904000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-904000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-904000 --alsologtostderr -v=3: (2.0797975s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-904000 -n newest-cni-904000: exit status 7 (62.027958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-904000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-692000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-692000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-692000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-692000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-692000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-692000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-692000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: kubelet daemon config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> k8s: kubelet logs:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-692000

>>> host: docker daemon status:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: docker daemon config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: docker system info:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: cri-docker daemon status:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: cri-docker daemon config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: cri-dockerd version:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: containerd daemon status:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: containerd daemon config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: containerd config dump:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: crio daemon status:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: crio daemon config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: /etc/crio:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

>>> host: crio config:
* Profile "cilium-692000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-692000"

----------------------- debugLogs end: cilium-692000 [took: 2.201997917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-692000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-692000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-277000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-277000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
